
Communications of the ACM

ACM News

Three Small Stickers in Intersection Can Cause Tesla Autopilot to Swerve Into Wrong Lane


In-car perspective during testing, with the interference markings circled in red.

Security researchers have demonstrated a way to use physical attacks to spoof the autopilot in a Tesla.

Credit: Tencent

An integral part of the autopilot system in Tesla's cars is a deep neural network that identifies lane markings in camera images. Neural networks "see" things very differently than we do, and it's not always obvious why, even to the people who create and train them. Usually, researchers train neural networks by showing them an enormous number of pictures of something (like a street) with features such as lane markings explicitly labeled, often by humans. The network gradually learns to identify lane markings based on similarities it detects across the labeled dataset, but exactly what those similarities are can be very abstract.
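
To make that training process concrete, here is a minimal sketch in PyTorch of the kind of supervised loop described above. The TinyLaneNet model, the data loader, and the human-labeled lane masks are hypothetical stand-ins for illustration; they are not Tesla's actual network or data.

```python
# Minimal sketch (PyTorch) of the supervised training described above.
# The model, dataset, and labels are hypothetical stand-ins, not Tesla's
# actual lane-detection network or data.
import torch
import torch.nn as nn

class TinyLaneNet(nn.Module):
    """Toy convolutional network that predicts a per-pixel lane-marking mask."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # one channel: lane-marking probability per pixel
        )

    def forward(self, x):
        return self.features(x)

def train(model, loader, epochs=10):
    """Standard supervised loop: street images paired with human-labeled lane masks."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, lane_masks in loader:   # lane_masks: human-labeled ground truth
            opt.zero_grad()
            loss = loss_fn(model(images), lane_masks)
            loss.backward()   # the network gradually adjusts its weights toward
            opt.step()        # whatever patterns in the data predict the labels
```

Whatever internal patterns the trained network ends up relying on need not correspond to anything a human would recognize as a lane marking, which is what the next paragraph exploits.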

Because of this disconnect between what lane markings actually are and what a neural network thinks they are, even highly accurate neural networks can be tricked through "adversarial" images, which are carefully constructed to exploit this kind of pattern recognition. Last week, researchers from Tencent's Keen Security Lab showed [PDF] how to trick the lane detection system in a Tesla Model S into both hiding lane markings that would be visible to a human and creating markings that a human would ignore, which (under some specific circumstances) can cause the Tesla's autopilot to swerve into the wrong lane without warning.
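
To illustrate the general idea behind an adversarial image, here is a hedged sketch of a single signed-gradient (FGSM-style) perturbation against a lane-detection model like the toy one above. This is a generic digital technique shown only for illustration; it is not the specific physical attack the Keen Security Lab researchers demonstrated, and the adversarial_image function and its epsilon value are assumptions.

```python
# Generic sketch of a digital adversarial perturbation: a small, carefully
# chosen change to each pixel that pushes the network's lane prediction the
# wrong way. Illustration only; not Keen Security Lab's actual method.
import torch

def adversarial_image(model, image, lane_mask, epsilon=0.03):
    """Nudge each pixel in the direction that increases the lane-detection loss,
    so markings the network detected can vanish (or phantom ones can appear)."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        model(image), lane_mask)
    loss.backward()
    # One signed-gradient step, clamped so the change stays visually minor.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation is small enough that a human still sees an ordinary road scene, yet the network's output changes, which is exactly the mismatch the researchers exploited with physical markings.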

Usually, adversarial image attacks are carried out digitally, by feeding a neural network altered images directly. It's much more difficult to carry out a real-world attack on a neural network, because it's harder to control what the network sees. But physical adversarial attacks may also be a serious concern, because they don't require direct access to the system being exploited—the system just has to be able to see the adversarial pattern, and it's compromised.


From IEEE Spectrum
View Full Article

