
Communications of the ACM

ACM TechNews

Voice Assistants Follow Inaudible Commands


Alexa, Siri, and Google Assistant.

Credit: Jens Mortensen/The New York Times

Researchers at Germany's Horst Görtz Institute for Information Technology Security (HGI) at Ruhr-University Bochum have developed exploits against voice assistants by encoding inaudible commands in songs and other audio content.

HGI's Lea Schönherr concealed inaudible commands in a variety of audio signals and tracked how Kaldi, the speech-recognition software integrated into many voice assistants, decoded the data, confirming whether it understood the hidden commands.
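
To make the idea concrete, here is a minimal sketch (not the HGI team's code) of the masking principle behind such attacks: a perturbation carrying the hidden command is attenuated wherever it would rise above a threshold derived from the carrier audio, so the doctored file sounds essentially unchanged to a listener. The fixed 20 dB margin and the use of the carrier's own spectrum in place of a real psychoacoustic model are simplifying assumptions, and plain noise stands in for a perturbation that would in practice be optimized against the recognizer.

```python
# Sketch of perturbation masking: keep the hidden signal below a crude
# per-frequency threshold derived from the carrier, then mix the two.
import numpy as np
from scipy.signal import stft, istft

def hide_perturbation(carrier, perturbation, fs=16000, margin_db=20.0):
    """Attenuate `perturbation` wherever it would exceed the carrier's
    (crude) masking threshold, then mix it into the carrier."""
    _, _, C = stft(carrier, fs=fs, nperseg=512)
    _, _, P = stft(perturbation, fs=fs, nperseg=512)
    # Crude masking threshold: carrier magnitude lowered by a fixed margin
    # (a stand-in for a real psychoacoustic model).
    threshold = np.abs(C) * 10 ** (-margin_db / 20.0)
    # Scale down only the perturbation bins that exceed the threshold.
    scale = np.minimum(1.0, threshold / (np.abs(P) + 1e-12))
    _, p_masked = istft(P * scale, fs=fs, nperseg=512)
    n = min(len(carrier), len(p_masked))
    return carrier[:n] + p_masked[:n]

if __name__ == "__main__":
    fs = 16000
    t = np.linspace(0, 1.0, fs, endpoint=False)
    carrier = 0.5 * np.sin(2 * np.pi * 440 * t)   # stand-in for music
    noise = 0.5 * np.random.randn(fs)             # stand-in for the hidden command
    doctored = hide_perturbation(carrier, noise, fs=fs)
```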

Initially, such exploits could be conducted only by feeding the doctored files directly into the speech-recognition software as data; the attack now works even when the files are played through loudspeakers.

The team is now developing countermeasures by teaching the speech-recognition system to eliminate any parts of an audio signal that are inaudible to humans.

Said HGI's Thorsten Eisenhofer, "Essentially, the recognition is meant to work rather like the human ear, rendering it more difficult to conceal secret messages in audio files."
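
A rough sketch of that countermeasure idea, under the same simplifying assumptions as above: spectral components falling far enough below the strongest signal in each frame are treated as inaudible and removed before the audio reaches the recognizer, discarding any perturbation hidden beneath them. The fixed margin is an assumption; a real implementation would rely on a proper psychoacoustic model.

```python
# Sketch of the defense: strip "inaudible" spectral content before
# recognition, so hidden perturbations are discarded along with it.
import numpy as np
from scipy.signal import stft, istft

def psychoacoustic_filter(audio, fs=16000, margin_db=20.0):
    _, _, Z = stft(audio, fs=fs, nperseg=512)
    mag = np.abs(Z)
    # Per-frame estimate: anything more than `margin_db` below the frame's
    # strongest component is treated as inaudible and zeroed out.
    frame_peak = mag.max(axis=0, keepdims=True)
    audible = mag >= frame_peak * 10 ** (-margin_db / 20.0)
    _, cleaned = istft(Z * audible, fs=fs, nperseg=512)
    return cleaned  # feed this to the recognizer instead of the raw audio
```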

From Ruhr-University Bochum (Germany)

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA


 
