
Communications of the ACM

ACM News

Driverless Cars Face Laser Sabotage


Figure: Anatomy of a spoofing attack. Researchers have discovered a way hackers could attack the LiDAR used by most autonomous vehicles to sense the car's own location in three dimensions within its environment, as well as obstacles such as other vehicles, pedestrians, and pets. (Credit: Yulong Cao et al.)

These are tough times for the developers of autonomous vehicles (AVs), and it looks like they may have just gotten tougher still. With AV makers already scrambling to prove vehicle safety in the face of widespread public distrust, following the killing of a pedestrian by an Uber Technologies AV in Tempe, AZ, in March 2018, the backers of emerging driverless technologies may now face a new threat: laser sabotage.

At issue is what has been dubbed an "adversarial sensor attack" on the light detection and ranging (LiDAR) sensors used by most AVs to sense the car's own location in three dimensions within its environment, and to detect obstacles such as other vehicles, pedestrians, pets, and street furniture.

To execute this photonic assault, say the researchers, all a saboteur needs to do is briefly aim a specially programmed laser attack device, which can be little bigger than a smartphone, at the spinning LiDAR sensor atop an oncoming AV. This can be done furtively by a pedestrian at the roadside, or perhaps from a car in an adjacent lane.

The laser pulse pattern the attacker injects into the LiDAR is designed to force the AV to suddenly perceive road obstacles that simply are not there, either sending the vehicle swerving sharply off the road in needless evasive action, or immobilizing it in an emergency stop in which it risks being rear-ended. Alternatively, the vehicle might simply freeze at traffic signals, blocking traffic because it believes it cannot get past the obstacle.

AV war gaming

What's behind this autonomous anarchy is a bout of "what if" security war gaming undertaken by a team of AV engineers at the University of Michigan in Ann Arbor (UMich). They say their laser experiments confirm the previously unproven notion that the machine learning-based perception models behind LiDAR can be gamed into acts of sabotage, and that defenses are needed.

Adversarial attacks on AVs are well known, but until now they have been based on fooling camera-based visual perception systems into interpreting, say, a 30-mph road sign as a 60-mph sign by placing subtle arrangements of stickers on the signs that subvert the machine learning networks' image recognition.

Yulong Cao and colleagues at UMich wondered what adverse effects might be possible if the LiDARs themselves, rather than the back-end artificial intelligence (AI) behind AV cameras, were attacked instead. At November's ACM Conference on Computer and Communications Security (CCS 2019) in London, Cao's team revealed how they got such a laser attack to work, and what AV makers might need to do to defend against it.

"As autonomous driving systems are safety-critical, and we know machine learning and deep learning models are vulnerable to adversarial machine learning attacks, we needed to investigate the real threats these vulnerabilities pose," Cao says.

Phantom menace

To generate phantom obstacles, the UMich team designed a spoofing device that uses a photodiode to sense the laser pulses emitted by an AV's LiDAR. A logic circuit then feeds those signals to an infrared laser inside the same device, which beams them back at the LiDAR it is aimed at. The LiDAR receives the gamed light data in addition to the genuine laser reflections from the surrounding environment. The result? The researchers found the LiDAR sensor could indeed be cajoled into producing an admittedly sparse, but still plausible, three-dimensional point cloud suggesting there may well be a significant obstacle ahead.
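
The researchers' device is analog and optical, but its effect on the perception pipeline can be pictured in software. The following is a minimal sketch, not the team's code: it merely appends a small cluster of fake returns to a simulated point cloud, the way injected pulses add a sparse phantom to the genuine reflections. The coordinates, point counts, and function names are illustrative assumptions.

```python
# A minimal sketch (not the researchers' code) of the spoofing idea:
# genuine LiDAR returns form a dense 3-D point cloud, and the attacker's
# injected pulses add a sparse cluster of extra points that the sensor
# cannot distinguish from real reflections. All values are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Genuine returns from the scene: N points as (x, y, z) in meters.
scene_points = rng.uniform(low=[-40, -40, 0], high=[40, 40, 3], size=(50_000, 3))

def inject_spoofed_points(cloud: np.ndarray, center: np.ndarray,
                          n_fake: int = 60, spread: float = 0.4) -> np.ndarray:
    """Append a sparse cluster of fake returns around `center`,
    mimicking the limited number of points a spoofing laser can forge."""
    fake = center + rng.normal(scale=spread, size=(n_fake, 3))
    return np.vstack([cloud, fake])

# Place a phantom "obstacle" 10 m directly ahead of the vehicle.
spoofed_cloud = inject_spoofed_points(scene_points, center=np.array([10.0, 0.0, 1.0]))
print(spoofed_cloud.shape)  # original points plus the injected cluster
```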

Fooling the LiDAR's machine learning perception system was not as easy as generating that point cloud; the machine learning network effectively intervened and ruled out the phantom obstacles as ignorable fakes, the researchers report in their CCS 2019 paper. "The machine learning model has learned what a real obstacle like a vehicle or pedestrian looks like," says Cao. Because their LiDAR spoofing device cannot fake something as dense and complex as a real obstacle, he says, it could not easily fool the perception system.

To get around the AI's deep smarts, the team had to optimize the adversarial nature of the laser pulse patterns the attack device generates in such a way that the LiDAR's backend machine learning network would misinterpret them as an obstacle. "That improved the attack success rates to around 75%," according to the paper.
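
The paper's actual optimization pipeline is not reproduced here, but the idea can be sketched as a search over the few dozen points the spoofer can inject, keeping only the perturbations that raise a perception model's obstacle score. The `perception_score` function below is a hypothetical placeholder for the real LiDAR perception network, and the random-search loop is an illustrative assumption, not the authors' method.

```python
# A hedged sketch of the optimization idea described above: the attacker
# searches over the small set of points the spoofer can inject, keeping
# perturbations that push a perception model's "obstacle" score higher.
# `perception_score` stands in for the real LiDAR perception network,
# which is not reproduced here; everything below is illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def perception_score(fake_points: np.ndarray) -> float:
    """Placeholder for the obstacle confidence a perception model would
    assign to the injected points (a real attack would query the model)."""
    target = np.array([10.0, 0.0, 1.0])
    return float(np.exp(-np.mean(np.linalg.norm(fake_points - target, axis=1))))

def optimize_spoof(n_points: int = 60, iters: int = 500) -> np.ndarray:
    """Simple random-search optimization of the spoofed point pattern."""
    best = rng.normal(loc=[10.0, 0.0, 1.0], scale=1.0, size=(n_points, 3))
    best_score = perception_score(best)
    for _ in range(iters):
        candidate = best + rng.normal(scale=0.05, size=best.shape)
        score = perception_score(candidate)
        if score > best_score:  # keep perturbations that fool the stand-in model more
            best, best_score = candidate, score
    return best

pattern = optimize_spoof()
print(round(perception_score(pattern), 3))
```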

Handheld hackery

The spoofing device itself needs only a battery, a field programmable gate array (FPGA)-based logic circuit, a photodiode, and an infrared laser. "So the attacker is able to make it small and portable," says Cao. "However, the spoofer device does require a minimum of precision when it is aimed at the LiDAR on the victim car to conduct the attack."

Having established the validity of the attack methodology, and the simplicity of the device that carries it out, the UMich team are now collaborating with the AV-industry-backed University of Michigan Autonomous Vehicles Test Facility, which numbers GM, Honda, and Ford among its sponsors. "We are collaborating with them to help them develop defenses against this threat," says Cao. That collaboration will involve moving the LiDAR attack tests from the lab environment to more realistic experiments on vehicles outdoors on test tracks.

Developing a robust defense against this laser attack should involve smart use of the broad suite of sensors available to AV designers, says Paul Newman, founder and chief technology officer at Oxbotica, an Oxford, U.K.-based maker of autonomous vehicle software. "Perception is a multi-modal problem, or at least it should be. False positives are difficult to deal with if you are only seeing one way, but if you see with vision, LiDAR, and radar, you can look for consensus across the three independent modalities," he says.

"So with this 'in principle' laser attack, our badly behaving adversary would need to fake data in two out of the three modalities, and have those fakes be mutual supporting and coherent in space and time. That suddenly makes a successful attack much, much, much harder."

Paul Marks is a technology journalist, writer, and editor based in London, U.K.


Sidebar
Message Translator Speeds Attacks On CAN Bus Cars

If news of the LiDAR laser attack on autonomous vehicles (AVs) left drivers of standard human-driven cars feeling smug, another University of Michigan in Ann Arbor (UMich) research team had some bad news at November's ACM CCS 2019 conference.

A team led by Mert Pesé and Kang Shin introduced LibreCAN, a proof of principle of an automated method that would allow car hackers to meddle with the safe operation of critical devices connected to any car's Controller Area Network (CAN) bus.

The CAN bus connects electronic control units (ECUs) for multiple critical vehicle systems, such as those controlling braking, the engine/powertrain, and steering. Until now, however, each manufacturer has used its own version of CAN bus messaging (and kept it secret), so hackers cannot know which version a vehicle uses without extensive eavesdropping on the message frames coursing through the network. As a result, they cannot simply connect and expect a hack to succeed.
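
To see why such eavesdropping is needed, consider how the same physical signal can be packed differently by different manufacturers. The two layouts below are invented for illustration (real definitions are proprietary, which is exactly the obscurity at issue); the sketch simply shows that a frame meaningful under one maker's format decodes to nothing under another's.

```python
# A hedged illustration of why CAN message formats differ between makers:
# the same signal (vehicle speed) might live at a different arbitration
# ID, byte offset, and scale in each manufacturer's layout. The layouts
# below are made up for illustration; real definitions are proprietary.
import struct

# Hypothetical per-manufacturer signal definitions (ID, byte offset, scale).
LAYOUTS = {
    "maker_a": {"speed": {"can_id": 0x1A0, "offset": 0, "scale": 0.01}},
    "maker_b": {"speed": {"can_id": 0x3F2, "offset": 4, "scale": 0.0625}},
}

def decode_speed(maker: str, can_id: int, payload: bytes) -> float | None:
    """Decode vehicle speed (km/h) from an 8-byte CAN payload, if this
    frame carries it under the given manufacturer's layout."""
    sig = LAYOUTS[maker]["speed"]
    if can_id != sig["can_id"]:
        return None
    raw, = struct.unpack_from(">H", payload, sig["offset"])  # 16-bit big-endian
    return raw * sig["scale"]

frame = bytes([0x27, 0x10, 0, 0, 0, 0, 0, 0])    # raw value 10000
print(decode_speed("maker_a", 0x1A0, frame))      # 100.0 km/h under maker A
print(decode_speed("maker_b", 0x1A0, frame))      # None: maker B uses a different ID
```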

This has been seen as beneficial, because anything that slows an attacker is deemed a good deterrent. "Automakers believe in security by obscurity," says Pesé.

With his colleagues, Pesé has shown how simple it is to develop a universal translator (which they call LibreCAN) that converts CAN bus messages written by a hacker into the precise format a particular car's bus will understand. That means hackers who engineer a similar translator of their own will not need knowledge of a vehicle's CAN message format to undertake malicious CAN injection attacks that dangerously change a car's behavior: they could mount an effective hack in just 40 minutes, the team estimates.

"Automakers need to implement countermeasures that repel CAN injection attacks," says Pesé. "A very simple way to achieve this is to implement a firewall in the vehicle gateway that blocks write access from the OBD-II port, which is the primary attack surface for these kind of attacks. Alternatively, automakers may authenticate or encrypt CAN data."
--P.M.


 
