
Communications of the ACM

ACM TechNews

Deep Learning: Achilles Heel in Robo-Car Tests



Machine learning (ML) is a serious obstacle to safety testing of autonomous cars, according to Carnegie Mellon University professor Philip Koopman. "Mapping machine learning-based systems to traditional safety standards is challenging because the training dataset does not conform to traditional expectations of software requirements and design," Koopman says.

Koopman says that if the U.S. Department of Transportation's Federal Automated Vehicles Policy described ML as an unusual, emerging technology, it would spur regulators to ask more focused questions about ML in their safety evaluations. Experts agree there is currently no way to truly test ML systems.

Among the areas Koopman thinks regulators should include in their safety assessments of ML-based driverless cars are representativeness of data, overfitting, testing environment validation, and analysis of brittleness. "It is essential that ISO 26262 style safety engineering be performed," he says. "Within that context, ML datasets either need to be credibly mapped into the standard's framework, or something additional must be done beyond ISO 26262 for ML validation." Koopman also argues that safety assessments by independent third parties should be mandatory, and that the policy should be redrafted to close the loopholes corporations exploit to avoid re-evaluation of safety-critical functions.
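Two of the checks Koopman lists can be illustrated with a small, self-contained sketch. The toy dataset, the nearest-centroid "model," and the 0.1 noise scale below are assumptions chosen purely for illustration; neither the article nor ISO 26262 prescribes them. The sketch only shows the general shape of an overfitting check (comparing training accuracy to held-out accuracy) and a brittleness check (counting prediction flips under small input perturbations).

    # Hypothetical sketch of two checks mentioned in the article: an overfitting
    # check (train vs. held-out accuracy gap) and a brittleness check (prediction
    # flips under small input perturbations). The toy data and nearest-centroid
    # "model" are stand-ins, not any real driving stack.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy dataset: two Gaussian clusters standing in for sensor-derived features.
    X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(3.0, 1.0, (200, 4))])
    y = np.array([0] * 200 + [1] * 200)
    idx = rng.permutation(len(X))
    train, test = idx[:300], idx[300:]

    # Toy "model": classify each sample by its nearer class centroid.
    centroids = np.array([X[train][y[train] == c].mean(axis=0) for c in (0, 1)])

    def predict(samples):
        d = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
        return d.argmin(axis=1)

    # Overfitting check: a large gap between training and held-out accuracy is a warning sign.
    train_acc = (predict(X[train]) == y[train]).mean()
    test_acc = (predict(X[test]) == y[test]).mean()
    print(f"train acc {train_acc:.3f}  test acc {test_acc:.3f}  gap {train_acc - test_acc:.3f}")

    # Brittleness check: add small noise and count how many predictions flip.
    noise = rng.normal(0.0, 0.1, X[test].shape)
    flips = (predict(X[test]) != predict(X[test] + noise)).mean()
    print(f"fraction of predictions flipped by small perturbations: {flips:.3f}")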

From EE Times

 

Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA


 
