
Communications of the ACM

ACM TechNews

A Stanford Professor's Quest to Fix Driverless Cars' Major Flaw


A driverless car being built at Stanford University.

A Stanford University professor is examining whether the computer systems of automated cars should be programmed to incorporate ethical decision-making.

Credit: David Paul Morris/Bloomberg

Stanford University professor Chris Gerdes is exploring how the computer systems of automated cars could be programmed to make ethical decisions, in part because he is skeptical of many people's assertions that the technology is ready for practical deployment.

"There's a lot of context, a lot of subtle but important things yet to be solved," Gerdes says.

One instance he cites is whether, in the event of an unavoidable accident, a driverless car should be programmed to make a moral choice between protecting its occupant and protecting a group of pedestrians in its path. "We need to take a step back and say, 'Wait a minute, is that what we should be programming the car to think about?'" Gerdes says. "Is that even the right question to ask? We need to think about traffic codes reflecting actual behavior to avoid putting the programmer in a situation of deciding what is safe versus what is legal."

Gerdes notes that, at this stage of driverless car technology's development, the hype over its advantages is at a peak. "The benefits are real, but we may have a valley ahead of us before we see all of the society-transforming benefits of this sort of technology," he cautions.

From Bloomberg

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA

