16:34 01 December 2015
Engineers in the HRI Laboratory at Tufts University in Massachusetts are programming robots to say "no" to human commands if obeying would put the robot's own safety at risk.
In a video posted online, one of the technicians is seen giving the robot a series of commands, including sitting down and standing up. However, when the machine is asked to walk off the edge of the table, it refuses, replying: "Sorry, I cannot do this as there is no support ahead." When the technician repeats the command, the tiny robot answers: "But, it is unsafe."
The robot agrees to walk forward only after the technician promises to catch it.
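In rough pseudocode terms, the behaviour shown in the video amounts to something like the sketch below. The function and sensor names (such as support_ahead and assurance_given) are illustrative assumptions for clarity, not taken from the researchers' actual code.

```python
# Illustrative sketch only (not the Tufts team's code): a directive is
# rejected when the robot detects a safety problem, unless the human
# explicitly accepts responsibility (e.g. by promising to catch it).

class Perception:
    """Stand-in for the robot's sensor readings."""
    def __init__(self, support_ahead):
        self.support_ahead = support_ahead


def respond_to_directive(directive, perception, assurance_given=False):
    """Return the robot's reply to a 'walk forward' style directive."""
    if directive != "walk forward":
        return "OK."                      # other directives pass straight through
    if perception.support_ahead:
        return "OK, walking forward."
    if assurance_given:
        return "OK, I trust you. Walking forward."
    return "Sorry, I cannot do this as there is no support ahead."


# The exchange from the video, re-enacted:
edge_of_table = Perception(support_ahead=False)
print(respond_to_directive("walk forward", edge_of_table))                          # refuses
print(respond_to_directive("walk forward", edge_of_table, assurance_given=True))    # complies
```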
The mechanism that lets the robots assess their own safety was developed by Gordon Briggs and Dr Matthias Scheutz.
In a paper, the pair explained: "Given the reality of the limitations of autonomous systems, most directive rejection mechanisms have only needed to make use of the former class of excuse - lack of knowledge or lack of ability.
"However, as the abilities of autonomous agents continue to be developed, there is a growing community interested in machine ethics, or the field of enabling autonomous agents to reason ethically about their own actions."