Looking ahead to a future of autonomous robotic technology, many roboticists are realizing that autonomous robots will inevitably find themselves in situations requiring a moral judgment. Examples include a health aide robot that must make a serious treatment decision on its own, an autonomous car that must choose the accident-avoidance maneuver that injures or kills the fewest people, or a military robot given the authority to kill on the battlefield.
To create guidelines that would let robots resolve these and other moral quandaries without human guidance, roboticists and computer scientists are turning to philosophers, psychologists, linguists, lawyers, theologians, and human rights experts.
Some experts are extremely optimistic that robots will become strong moral reasoners. Matthias Scheutz of the Human-Robot Interaction Laboratory at Tufts University believes robots will eventually be better and more consistent at making moral judgments than human beings.
However, others are skeptical of giving robots the authority to make moral decisions, especially in matters of life and death. Speaking to the United Nations last year about the prospect of autonomous military robots, Peter Asaro of Stanford University's Center for Internet and Society said machines are not "capable of considering the value" of a human life, and that giving them the legal authority to kill would be an affront to human dignity.
From The New York Times Magazine
Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA