In the two decades since the death of science fiction author Isaac Asimov, his concept of robots programmed to obey built-in safety rules has become a touchstone for artificial intelligence (AI) researchers.
He proposed three laws to prevent robots from harming humans, but they contain contradictions that have encouraged others to propose new rules. In a 2004 essay, Michael Anissimov of the Singularity Institute for Artificial Intelligence noted that "it's not so straightforward to convert a set of statements into a mind that follows or believes in those statements." Anissimov believes that rather than trying to program machines with rules, developers must create "friendly AI" that loves humans.
Three years ago, Texas A&M scientist Robin Murphy and Ohio State Cognitive Systems Engineering Lab director David Woods proposed three laws to govern autonomous robots. The first holds that because humans deploy robots, human-robot systems must be held to high safety and ethical standards. The second states that robots must obey appropriate commands, but only from a limited number of people. The third asserts that robots must protect their own existence, but only after transferring control of their operations to humans.
From Inside Science
Abstracts Copyright © 2012 Information Inc., Bethesda, Maryland, USA