Researchers continue to work on making robots safer around humans, but some robotics experts say the key is to stop building robots that lack ethics. "If you build artificial intelligence but don't think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, author of "Beyond AI: Creating the Conscience of the Machine."
For years, science fiction author Isaac Asimov's Three Laws of Robotics have served as guidelines for robot behavior: a robot may not injure a human being or, through inaction, allow one to come to harm; a robot must obey orders given by human beings, except where such orders would conflict with the first law; and a robot must protect its own existence, as long as doing so does not conflict with the first or second law. However, as robots are increasingly incorporated into the real world, some believe that Asimov's laws are too simplistic.
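The charge of simplicity becomes concrete if you try to operationalize the laws in software. The sketch below (Python; every name and field is hypothetical, and no real robot control system works this way) treats the three laws as a strict priority ordering over candidate actions. The ordering itself is trivial to code; the hard part is the inputs, since judging whether an action harms a human or allows harm is an open perception and prediction problem that the laws simply assume away.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Action:
        name: str
        harms_human: bool       # would this action injure a human?
        allows_harm: bool       # would choosing it let a human come to harm?
        obeys_order: bool       # does it carry out a human's order?
        self_destructive: bool  # does it endanger the robot itself?

    def choose(actions: list[Action]) -> Optional[Action]:
        # First Law dominates: any option that injures a human, or lets
        # one come to harm, is disqualified outright.
        legal = [a for a in actions if not (a.harms_human or a.allows_harm)]
        if not legal:
            return None  # the laws give no guidance; every option violates them
        # Second Law outranks Third: prefer obedience over self-preservation.
        return max(legal, key=lambda a: (a.obeys_order, not a.self_destructive))

    options = [
        Action("follow order into fire", True, False, True, True),
        Action("stand by and watch", False, True, False, False),
        Action("pull the human out", False, False, False, True),
    ]
    print(choose(options).name)  # -> "pull the human out"

Even this toy version must return None when every option violates the first law, a deadlock the laws themselves never address.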
Robo-ethicists want to develop a set of guidelines that cover how to punish a robot, who should regulate robots, and even a "legal machine language" to help police the next generation of intelligent automated devices. Willow Garage research scientist Leila Takayama says that even if robots are not completely autonomous, there needs to be a clear set of rules governing responsibility for their actions. The next generation of robots will be able to make independent decisions and work relatively unsupervised, robo-ethicists say, which means rules must be established covering both how humans should interact with robots and how robots should behave.
From Wired News
Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA