Why robots can't be trusted with weapons
The idea that robots might one day be able to tell friend from foe is deeply flawed, says roboticist Noel Sharkey of the University of Sheffield in the UK. He was commenting on a report calling for weapon-wielding military robots to be programmed with the same ethical rules of engagement as human soldiers.
The report (www.tinyurl.com/roboshoot), which was funded by the Pentagon, says firms rushing to fulfil the requirement for one-third of US forces to be uncrewed by 2015 risk leaving ethical concerns by the wayside. "Fully autonomous systems are in place right now," warns Patrick Lin, the study's author at California Polytechnic State University in San Luis Obispo. "The US navy's Phalanx system, for instance, can identify, target, and shoot down missiles without human authorisation."