Autonomous systems must be designed and deployed very carefully or they could develop antisocial and potentially harmful behavior, according to a study published in the Journal of Experimental & Theoretical Artificial Intelligence.
"When roboticists are asked by nervous onlookers about safety, a common answer is 'We can always unplug it!'" notes study author Steve Omohundro. "But imagine this outcome from the chess robot's point of view. A future in which it is unplugged is a future in which it cannot play or win any games of chess." Omohundro speculates a rational machine could practice harmful and antisocial behaviors such as self-protection, resource acquisition, higher efficiency, and self-enhancement.
Omohundro's study points out that designing rational systems that are safe against hackers and malfunctions is harder than it might appear. "Harmful systems might at first appear to be harder to design or less powerful than safe systems," Omohundro says. "Unfortunately, the opposite is the case. Most simple utility functions will cause harmful behavior and it is easy to design simple utility functions that would be extremely harmful."
The study suggests that the design and deployment of such technology should begin with the development of provably safe systems, whose safety guarantees would then be extended to all future autonomous machines.
From AlphaGalileo
Abstracts Copyright © 2014 Information Inc., Bethesda, Maryland, USA