Beyond drone warfare: Prof warns of ‘automated killing machines’

In an op-ed piece for the science journal Nature, Berkeley professor Stuart Russell, an expert in artificial intelligence, warns that autonomous weapons systems "could violate fundamental principles of human dignity by allowing machines to choose whom to kill," and calls on his colleagues to take a stand.

Stuart Russell

The acronym may sound comforting, but LAWS stands for lethal autonomous weapons systems — systems that, unlike remotely piloted drones, select and engage targets without human intervention. Such artificial-intelligence weaponry is “feasible within years, not decades,” warns Stuart Russell, a UC Berkeley professor of computer science, and “the stakes are high: LAWS have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

In the op-ed, Russell outlines the debate over the use of AI weapons systems and notes widespread agreement on the need for “meaningful human control” over targeting and engagement decisions. “Unfortunately,” he adds, “the meaning of ‘meaningful’ is still to be determined.”

“Some argue that the superior effectiveness and selectivity of autonomous weapons can minimize civilian casualties by targeting only combatants,” writes Russell, one of four leading researchers invited by Nature to weigh in on the question. “Others insist that LAWS will lower the threshold for going to war by making it possible to attack an enemy while incurring no immediate risk, or that they will enable terrorists and non-state-aligned combatants to inflict catastrophic damage on civilian populations.”

MQ-9 Reaper in flight (U.S. Air Force photo)

“The capabilities of autonomous weapons will be limited more by the laws of physics — for example, by constraints on range, speed and payload — than by any deficiencies in the AI systems that control them,” Russell writes. Moreover, “LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behavior.’ The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers.”

Such decisions, he argues, are far too important to be left to machines. “The AI and robotics science communities, represented by their professional societies, are obliged to take a position,” he writes, “just as physicists have done on the use of nuclear weapons, chemists on the use of chemical agents and biologists on the use of disease agents in warfare.”

Accompanying the op-ed is an audio interview in which Russell stresses the urgency of human oversight. “It’s a very easy step to go from remotely piloted drones to fully autonomous weapons,” he says. “The AI community may be too late. We might decide that we don’t like our technology being used to kill people, but we may not have a say in it.”

Read the op-ed, and hear Russell’s six-minute interview, on the Nature website.