Tuesday, December 02, 2008

Teaching robots right from wrong

Cornelia Dean writes a brief article on people trying to develop intelligent battle robots that can behave more ethically on the battlefield than humans currently can. It focuses on the work of Ronald Arkin at Georgia Tech.
In the heat of battle, their minds clouded by fear, anger or vengefulness, even the best-trained soldiers can act in ways that violate the Geneva Conventions or battlefield rules of engagement. Now some researchers suggest that robots could do better. ... Some of the potential benefits of autonomous fighting robots: for one thing, they can be designed without an instinct for self-preservation and, as a result, with no tendency to lash out in fear. They can be built without anger or recklessness, Dr. Arkin wrote, and they can be made invulnerable to what he called “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it agrees with their pre-existing ideas.

Dr. Arkin’s approach involves creating a kind of intellectual landscape in which various kinds of action occur in particular “spaces.” In the landscape of all responses, there is a subspace of lethal responses. That lethal subspace is further divided into spaces for ethical actions, like firing a rocket at an attacking tank, and unethical actions, like firing a rocket at an ambulance. ... Because rules like the Geneva Conventions are based on humane principles, Dr. Arkin said, building them into the machine’s mental architecture endows it with a kind of empathy. He added, though, that it would be difficult to design “perceptual algorithms” that could recognize when people were wounded or holding a white flag or otherwise “hors de combat.”
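The nested-spaces idea can be sketched in a few lines of code. This is purely illustrative, not Dr. Arkin's actual system: the `Response` type, the constraint on protected targets, and the helper names are all assumptions made up for the example. The point is only that the set of all candidate responses contains a lethal subspace, and that a rule encoding a humane principle partitions that subspace into permissible and impermissible actions.

```python
# Illustrative sketch (not Dr. Arkin's actual architecture): filter
# candidate responses so that lethal actions survive only when a
# hypothetical rule of engagement permits them.
from dataclasses import dataclass

@dataclass(frozen=True)
class Response:
    action: str
    target: str
    lethal: bool

# Hypothetical encoding of one Geneva-Convention-style principle:
# lethal force must not be directed at protected targets.
PROTECTED_TARGETS = {"ambulance", "civilian", "surrendering soldier"}

def is_ethical(r: Response) -> bool:
    """Non-lethal responses pass; lethal ones pass only if the
    target is not protected."""
    if not r.lethal:
        return True
    return r.target not in PROTECTED_TARGETS

def permissible(responses):
    """Restrict the full response space to its ethical subspace."""
    return [r for r in responses if is_ethical(r)]

candidates = [
    Response("fire rocket", "attacking tank", lethal=True),
    Response("fire rocket", "ambulance", lethal=True),
    Response("withdraw", "none", lethal=False),
]

allowed = permissible(candidates)
# The rocket at the tank and the withdrawal remain;
# the rocket at the ambulance is excluded.
```

Of course, the hard part the article flags is not this filtering step but the "perceptual algorithms" that would have to classify targets correctly in the first place.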
Noel Sharkey, a computer scientist at the University of Sheffield in Britain, has written that this is not ‘Terminator’-style science fiction but grim reality. He would ban lethal autonomous robots until they demonstrate they will act ethically, a standard he believes they are unlikely to meet. Meanwhile, he said, he worries that advocates of the technology will exploit the ethics research to allay political opposition.

Daniel C. Dennett, a philosopher and cognitive scientist at Tufts University:
“If we talk about training a robot to make distinctions that track moral relevance, that’s not beyond the pale at all,” he said. But, he added, letting machines make ethical judgments is “a moral issue that people should think about.”
