
Communications of the ACM

ACM News

The Eyes Have It



Biometric interfaces continue to advance, and will leap beyond smartphones; homes, smart TVs, and even automobiles could incorporate gaze and gesture controls.

Credit: Hewlett Packard

One of the things that makes biometrics so powerful is that it links humans and machines in a natural way. Instead of tapping out cumbersome passwords or PINs, a physical characteristic such as a face scan or fingerprint becomes the method of authentication. It's convenient, fast, and typically secure.

Yet biometrics is emerging as more than an authentication tool. Researchers are exploring ways to build it into device interfaces, particularly on smartphones. The ability to read eye movements, facial expressions, and other physical characteristics may lead to significant advances in interfaces in the coming years.

"It's possible to greatly enhance the way we interact with devices through biometrics," says Karan Ahuja, a doctoral student at Carnegie Melon University (CMU) and  volunteer co-editor-in-chief of XRDS: Crossroads, the ACM magazine for students. "Embedding the technology into user interfaces could make many actions more natural and instinctive."

Ahuja and a team of researchers at CMU have developed a gaze-tracking tool called EyeMU that allows users to control their devices without lifting, or even applying, a finger. Meanwhile, another group in Japan has developed a system called ReflecTouch that reads light reflected from a user's pupils and adjusts the interface automatically.

"The integration of biometrics into experiences isn't just about improving interfaces, but human experiences in general," says Eugenio Santiago, senior vice president of user research at Brooklyn, NY-based digital design and consulting firm Key Lime Interactive. "It introduces an opportunity for greater personalization and contextual relevance."

Hands Off

Despite the powerful capabilities of today's smartphones, the functionality and usability of these devices remain somewhat stunted. Voice assistants like Siri, for example, are no help when a person is scrolling through photos or text messages; there is no way to avoid swiping and flicking past each image or object to reach the next one.

Larger devices and increasingly complex apps have also made it difficult to manage tasks—especially if a person has only one hand available. "There's a need to provide a user interface that is easy to use in a variety of situations. Biometrics can serve as a trigger and the sensors built into phones can capture the necessary information," says Xiang Zhang, a graduate student and researcher at Keio University in Yokohama, Japan.

The ReflecTouch system that Zhang and fellow researchers developed uses the selfie camera on a standard iPhone to detect light reflecting from a user's pupils. As the data flows through a machine learning algorithm, the phone automatically adjusts the interface to accommodate the task at hand. Depending on how a user is holding the phone in relation to his or her eyes, buttons and alignment may change.

"Our method…uses a combination of classical computer vision techniques and machine learning, and then detects grasp posture using a CNN (convolutional neural network)," Zhang says.

The EyeMU system combines gaze control and hand gestures to simplify navigation. It relies on various sensors built into the phone, along with eye tracking, to anticipate what a user wants to do. For instance, if the camera detects a person gazing at a notification or alert for a few seconds, a hand gesture—in this case a flick to the right—snoozes it, while a flick to the left dismisses it.

"The goal is to make it possible to navigate through actions in an intuitive way," Ahuja says. "Using gaze along with other motions, it's possible to review emails and text messages, news, photos, and much more."

A Vision and a View

While biometric interfaces aren't quite ready for prime time, they are inching closer to commercial reality. The CMU group, for example, reported achieving gesture classification accuracy of 97.3%, while the group from Keio University, Tokyo University of Technology, and Yahoo Japan Corporation achieved 85% accuracy.

Ultimately, an accuracy level above 99% is needed, Ahuja says. "The user experience has to be fluid and natural. People have very little tolerance for errors." On a technical level, there's a need for further advances in cameras—including depth capabilities—and improvements in processing speed and algorithms. "Any system must work across dramatically different situations, including lighting," he says.

Not surprisingly, there are some concerns about biometric interfaces, many of which revolve around security and privacy, and how the technology could be used and abused by advertisers and others. "If you are interested in understanding how people react to an experience or an interface, biometrics certainly can provide a window into that," Santiago says.

Nevertheless, biometric interfaces continue to advance, and they will leap beyond smartphones. Homes, smart TVs, and even automobiles—some of which already detect when a driver is nodding off—could incorporate gaze and gesture controls. "We can personalize things today, but 'work' is required for us to set it up," Santiago says. "Biometrics opens the possibility for personalization to 'just happen.'"

Samuel Greengard is an author and journalist based in West Linn, OR, USA.


 
