In interviews, Stanford University scholars discuss the challenging ethical ramifications of driverless vehicles, one of which concerns how automakers or policymakers can ensure public safety.
Stanford professors Ken Taylor and Rob Reich question whether artificial intelligence can replace humans as moral decision-makers. "Will these cars optimize for overall human welfare, or will the algorithms prioritize passenger safety or those on the road?" Reich asks.
Stephen Zoepf of the Center for Automotive Research at Stanford (CARS) says a more pressing issue is how much risk society is willing to accept from driverless cars, and his CARS team is focusing on programming ethical behavior into automobiles.
Some scholars see a need for greater transparency in the design of self-driving vehicles and their underlying algorithms. All, however, are united in pushing for greater interdisciplinary collaboration that brings social and ethical researchers into the development of driverless cars and other revolutionary technologies.
From Stanford News
Abstracts Copyright © 2017 Information Inc., Bethesda, Maryland, USA