Robot ethics: Thou shalt not kill?

Military departments around the world are capitalising on improving robotics technology to help them with war efforts.

Where wars were once fought hand-to-hand or with soldiers shooting it out, the reality of modern warfare means operators in the US can decide, at the touch of a button, whether people in Pakistan live or die.

Robots could also one day replace humans on the battlefield, but how far away are we from this type of robotic warfare and what are the ethical implications?

Computerworld Australia also spoke to the Department of Defence about its involvement in robotics for military purposes.

The move to free-thinking robots

The US is a significant user of military drones, otherwise known as unmanned aerial vehicles. Its arsenal of drones has increased from fewer than 50 a decade ago to around 7000, according to a report by the New York Times, with Congress sinking nearly $5 billion into drones in the 2012 budget.

Robotics research is increasingly heading in the direction of autonomy, with a race on to create robots capable of thinking for themselves and making their own decisions.

For example, robots can now play soccer against each other and be completely autonomous during a match, making their own decisions on how to play the game.

This type of autonomy could also be applied to military robots, but instead of a friendly game of soccer, robots could theoretically be programmed to kill – either at will or targeting specific people.

Robert Sparrow, associate professor in the School of Philosophical, Historical and International Studies at Monash University, warns we are opening Pandora’s box with autonomous military robots, and that there are major ethical implications.

He argues that military robots make the decision to go to war more likely as it means governments “can achieve their foreign policy goals by sending robots without taking [on] casualties,” he told Computerworld Australia.

“If you thought you were going to [have] 10,000 casualties, for instance, in going into a conflict, then you have to have a pretty good reason to do it. If you think we’ll just send half a dozen robots in and kill a lot of high valued targets, then that calculus looks very different and favours going to war.”

Current technology also means a robot could, theoretically, be armed with weapons and programmed to kill.

Mary-Anne Williams, director of the Innovation and Enterprise Research Lab at the University of Technology, Sydney, says robots can be trained to kill “with surprising ease”.

“They can aim, shoot and fire. Robots today have sophisticated sensory-perception and [are] able to detect human targets. Once detected, robots can use a wide range of weaponry to kill targets,” she says.

The potential for military robots to be used for morally questionable actions is spurring some academics to call for a code of ethics governing the use of military robots.

Williams says adhering to a robot code of ethics is currently left to the individuals designing the robots, and that the rest of society needs to push for a set of guidelines which robots must follow.

“Robots can undertake physical action which can impact [on] people and property, so their actions must be governed by laws and a code of ethics. Robot actions can have a significant impact and lead to loss of life. Therefore robots must act in accordance with the law,” she says.

Isaac Asimov foresaw the technological reality we are now living in, detailing three laws of robotics in his 1950 book I, Robot. The laws are:

  1. A robot may not injure a human or, through inaction, allow a human to come to harm
  2. A robot must obey orders given by humans, except where such orders would conflict with the first law
  3. A robot must protect its own existence, as long as doing so does not conflict with the first two laws.
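The key feature of Asimov's laws is their strict priority ordering: each law only applies when it does not conflict with the laws above it. As a purely hypothetical illustration – not how any real robot control system works – that ordering can be sketched as a simple rule check, with all names and the `Action` structure invented for this example:

```python
# Hypothetical sketch of Asimov's three laws as a prioritized rule check.
# The Action fields and permitted() function are invented for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would the action injure a human?
    ordered_by_human: bool  # was the action ordered by a human?
    protects_self: bool     # does the action preserve the robot?

def permitted(action: Action) -> bool:
    # First Law: a robot may not harm a human. This overrides everything below.
    if action.harms_human:
        return False
    # Second Law: obey human orders (any harmful order was already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, allowed only because neither law above applies.
    return action.protects_self

# An order to fire on a person fails the First Law check, despite being an order.
print(permitted(Action(harms_human=True, ordered_by_human=True, protects_self=False)))   # False
# A harmless order is obeyed under the Second Law.
print(permitted(Action(harms_human=False, ordered_by_human=True, protects_self=False)))  # True
```

The point of the sketch is the ethical tension the article describes: a military robot programmed to kill would, by design, violate the very first check in any Asimov-style hierarchy.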
