
Communications of the ACM

ACM TechNews

Listen to Me: Machines Learn to Understand How We Speak


Your smartphone is learning to better understand your voice commands.

Credit: Karlis Dambrans/Flickr

During its recent Worldwide Developers Conference, Apple announced additional voice recognition features for its Siri personal assistant app as part of the iOS 9 update.

One of Siri's new abilities is a limited capacity to understand context; for example, a user viewing a Facebook invitation can tell Siri to "remind me of this."

Voice recognition software has come a long way in recent years, especially with the introduction of natural-language processing and artificial neural networks that can be trained to recognize language. Google recently reported error rates of less than 8 percent.
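Error rates like the one Google cites are conventionally measured as word error rate: the number of word substitutions, deletions, and insertions needed to turn the recognizer's transcript into a reference transcript, divided by the length of the reference. The short Python sketch below illustrates that calculation only; the function name and example transcripts are invented for illustration and are not drawn from any of the systems mentioned in the article.

    def word_error_rate(reference: str, hypothesis: str) -> float:
        """WER = (substitutions + deletions + insertions) / reference length."""
        ref = reference.lower().split()
        hyp = hypothesis.lower().split()

        # Levenshtein distance over words via dynamic programming:
        # d[i][j] = edits needed to turn the first i reference words
        # into the first j hypothesis words.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i          # i deletions
        for j in range(len(hyp) + 1):
            d[0][j] = j          # j insertions
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution or match
        return d[len(ref)][len(hyp)] / len(ref)

    # Invented example: the recognizer mishears one of thirteen words,
    # giving a word error rate just under 8 percent.
    reference = "remind me to pick up the tickets when I get to the office"
    hypothesis = "remind me to pick up the ticket when I get to the office"
    print(f"WER: {word_error_rate(reference, hypothesis):.1%}")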

However, major challenges remain, writes Central Queensland University senior lecturer Michael Cowling. He notes pronunciation is a significant problem, especially in a language such as English, where pronunciation does not always align with spelling and can vary with a speaker's accent or dialect. Software also struggles to pick up on contextual clues that people recognize easily.

Still, Cowling says progress is being made every day. Both Microsoft and Google recently revealed impressive advancements in automatic translation. Google, for example, unveiled technology that combines image or voice recognition, natural-language processing, and a smartphone camera to automatically translate signs or short conversations.

From The Conversation

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 
