There is scarcely a part of the human body that has not been harnessed in some fashion to help control computing devices or to authenticate users.
Brain-computer interfaces, for instance, tap our thoughts for both machine and game control, while gaze-trackers capture our eyeball motion to sense what's grabbing our attention, and face and fingerprint recognition systems help confirm our identities.
Now, engineers exploring the field of human-computer interaction (HCI) have found yet another part of the body for our electronic devices to exploit: our teeth.
This might sound like a peculiar choice, but it is not the work of one outlier research team. At three virtual conferences in September, three separate research teams revealed early versions of tooth-based technologies. These three systems, respectively, allow users to:

- control smartphone apps by clicking or sliding their teeth together,
- use the shape and texture of their teeth as a biometric to unlock their phones, and
- monitor their oral and general health via sensor-equipped dental braces.
The first idea, called EarSense, was presented at September's ACM SIGMOBILE MobiCom 2020 event by Jay Prakash of the University of Illinois at Urbana-Champaign. The technology Prakash and his team have developed re-engineers a smartphone's earpiece so it can work as a microphone as well as a loudspeaker, allowing it to detect when people tap their teeth together a number of times, or gently slide their teeth over each other.
Because the sounds of tooth clicks and slides reverberate through the skull and jawbone (or mandible) up to the eardrum, in an acoustic phenomenon called cranio-mandibular transmission, they can activate commands in, say, music and audiobook apps: two clicks of the teeth might start playback, three clicks skip a track, and a slide of the teeth rewind playback by a set number of seconds.
To make a tiny earpiece loudspeaker operate as a microphone, Prakash had to undertake a bit of software surgery on the soundcard that drives the earpiece. "Typically, soundcards support toggling to operate in microphone or speaker mode, and this method of to-and-fro conversion is known as jack-retasking," he says. "We exploited this feature in Realtek soundcards to convert an earphone into an input transducer. By default, the system acts as a microphone and the moment there is a request to play music or make a call, it turns to speaker mode."
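In code terms, that default-to-microphone behavior amounts to a small state machine. The sketch below is illustrative only; the set_jack_mode() call is a hypothetical stand-in for the Realtek driver interface, which the team does not spell out:

```python
# A minimal sketch of the jack-retasking logic Prakash describes.
# set_jack_mode() is a hypothetical stand-in, not Realtek's actual API.

class RetaskingEarpiece:
    """Earpiece transducer that defaults to microphone mode and flips
    to speaker mode whenever playback or a call is requested."""

    MIC, SPEAKER = "mic", "speaker"

    def __init__(self, soundcard):
        self.soundcard = soundcard
        self.mode = None
        self._set_mode(self.MIC)   # default: listen for tooth gestures

    def _set_mode(self, mode):
        if mode != self.mode:
            self.soundcard.set_jack_mode(mode)   # hypothetical driver call
            self.mode = mode

    def on_playback_request(self):
        self._set_mode(self.SPEAKER)   # music or a call takes over

    def on_playback_stopped(self):
        self._set_mode(self.MIC)       # revert to sensing tooth gestures
```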
In tests with 18 volunteers using EarSense on Android phones, the Urbana-Champaign team identified seven highly recognizable combinations of tooth-click and sliding "gestures" that can be used to drive apps, like a dental variant of Morse code. "This enables a new form of contactless user interface where a user can scroll, click, type, pause, et cetera, simply using their teeth," the Urbana-Champaign team wrote in its MobiCom paper. Because the sound is contained within the mouth and jawbone, it is not a threat to user privacy, they write: "The command cannot be heard or seen by anyone." One application they foresee for EarSense is letting users inconspicuously tap out the digits of a received two-factor authentication code with their teeth, without having to poke around on the phone.
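To make the idea concrete, here is a minimal sketch of how a click-count gesture might be detected and dispatched. The energy threshold, window length, and command table are invented for illustration, standing in for the paper's actual seven-gesture classifier:

```python
import numpy as np

SAMPLE_RATE = 16_000
FRAME = 256                  # samples per analysis frame
CLICK_THRESHOLD = 0.02       # RMS energy above this counts as a click onset
GESTURE_WINDOW_S = 1.5       # clicks within this window form one gesture

COMMANDS = {1: "pause", 2: "play", 3: "next_track"}   # hypothetical mapping

def detect_clicks(signal: np.ndarray) -> list:
    """Return sample indices of click onsets via simple energy thresholding."""
    onsets, armed = [], True
    for start in range(0, len(signal) - FRAME, FRAME):
        rms = np.sqrt(np.mean(signal[start:start + FRAME] ** 2))
        if rms > CLICK_THRESHOLD and armed:
            onsets.append(start)
            armed = False                  # re-arm only after energy falls
        elif rms < CLICK_THRESHOLD / 2:
            armed = True
    return onsets

def dispatch(signal: np.ndarray):
    """Count the clicks in one gesture window and map the count to a command."""
    clicks = [t for t in detect_clicks(signal)
              if t < GESTURE_WINDOW_S * SAMPLE_RATE]
    return COMMANDS.get(len(clicks))   # None if the count matches no gesture
```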
EarSense is not yet ready for prime time, however: Prakash and colleagues are working on getting their system to sense tooth gestures while music or other audio is playing, instead of toggling between playback and gesture reception. "As of now, both cannot happen simultaneously. If music is being played at any time, the teeth gesture function won't work. We are now working on realizing full-duplex mode," says Prakash.
One dentistry expert is guardedly impressed. "The idea of EarSense as a proof of concept is really, really interesting," says Dr. Damien Walmsley, professor of prosthetic dentistry at the University of Birmingham in the U.K., and a British Dental Association expert on smart dental devices. Walmsley says that because people already damage their teeth by grinding them excessively in their sleep, the EarSense team should first assess how often their system might be used, and perhaps limit its use to short periods.
Meanwhile, at UbiComp 2020, ACM's international joint conference on pervasive and ubiquitous computing, Daibo Liu and colleagues at Hunan University in Changsha, China, together with Jie Xiong at the University of Massachusetts Amherst and Zhichao Cao at Michigan State University, revealed an extra, tooth-based layer of biometric security for smartphones. The reason? "We store a lot of sensitive personal data in our phones, and user authentication is the most critical measure we can take to protect this information," says Liu, noting that current phone security methods based on facial recognition have been spoofed with anti-surveillance prosthetic masks, and fingerprint sensors with wax copies of authorized users' prints.
Their answer is SmileAuth, an additional identification system for phone users that creates a personal biometric signature hash from the profile of the ridge line of your teeth (the "dental edge"), plus tooth size, shape, and position, and, crucially, the tooth surface texture, which is shaped by what you eat and how you chew. Yes, with this system you are, indeed, being authenticated by the skin of your teeth.
To use SmileAuth, users grin broadly to bare their teeth, holding them close to the phone's camera lens; an Android app then authenticates or rejects them. It seems to work well: in tests on 300 volunteers, the system proved 99.74% accurate, and could even differentiate between twins, the team reports. Security-wise, SmileAuth even resists attacks in which someone surreptitiously video-records your smile and, after stealing your phone, attempts to play the recording back into it. The SmileAuth algorithm ensures "tooth texture features effectively resist video playback attacks," says Liu.
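The enrollment-and-match step can be pictured with a minimal sketch. Everything here is an assumption for illustration: the toy extractor below merely traces the dental edge line from a grayscale image array, whereas the real SmileAuth features also encode per-tooth texture, and the similarity threshold is invented:

```python
import numpy as np

MATCH_THRESHOLD = 0.92   # hypothetical cosine-similarity acceptance threshold

def extract_features(image: np.ndarray) -> np.ndarray:
    """Toy stand-in extractor: the row of the strongest vertical intensity
    gradient in each image column approximates the dental edge profile."""
    grad = np.abs(np.diff(image.astype(float), axis=0))
    edge_profile = grad.argmax(axis=0).astype(float)
    return edge_profile / image.shape[0]        # normalize to [0, 1]

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def authenticate(enrolled: np.ndarray, candidate_image: np.ndarray) -> bool:
    """Accept the user if the candidate smile's features lie close enough
    to the feature vector stored at enrollment."""
    candidate = extract_features(candidate_image)
    return cosine_similarity(enrolled, candidate) >= MATCH_THRESHOLD
```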
SmileAuth could see the light of day as an actual product: while the biometric algorithms are being optimized, says Liu, the researchers "are now in cooperation with local companies to test the system in a larger population." However, they will have to take into account the diets of users: "The thing about teeth is that they change over time, especially for people who like fizzy drinks and an acid-rich diet," says Walmsley.
Such diets lead to more fillings, crowns, and damage to the sheen or texture of the outer covering of the teeth, changing the data SmileAuth acquires. "So they're going to have to watch the dietary habits of the people who use this," Walmsley cautions.
It's keeping track of those acid levels and other chemical health metrics in the mouth that holds the interest of Katia Vega and her team at the University of California, Davis, and at Imperial College London in the U.K. At the International Symposium on Wearable Computers (ISWC), co-located with UbiComp, Vega and colleagues unveiled BraceIO, a novel smartphone-mediated take on the dental braces people wear to straighten errant teeth, fix their bite, or close gaps.
"We usually have to take blood tests to know what is going on with our health, but what if the common ways in which we modify our bodies, such as using braces, can be used as an interactive device that displays our health information?" asks Vega. In ongoing work, Vega and her team are investigating how to populate braces with emerging breeds of color-changing biosensors that reveal the state of somebody's oral health.
How? A portable Bluetooth microscope pointed at the brace inside the mouth beams color data from a biosensor on each tooth to a phone app, which translates it into key readings such as the pH of saliva, and the levels of chemicals in the body that help doctors diagnose periodontal, metabolic, kidney, or cardiovascular conditions.
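That color-to-chemistry step can be sketched as a simple calibration lookup. The hue-to-pH table below is invented for illustration; a real deployment would be calibrated against the specific colorimetric dye used in each biosensor:

```python
import numpy as np

# Hypothetical calibration: hue (degrees) observed at known pH values.
CAL_HUES = np.array([10.0, 35.0, 60.0, 90.0, 120.0])
CAL_PH   = np.array([ 4.5,  5.5,  6.5,  7.0,   7.8])

def hue_from_rgb(r: float, g: float, b: float) -> float:
    """Hue in degrees from normalized RGB (the standard RGB-to-HSV formula)."""
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0
    if mx == r:
        return (60 * (g - b) / (mx - mn)) % 360
    if mx == g:
        return 60 * (b - r) / (mx - mn) + 120
    return 60 * (r - g) / (mx - mn) + 240

def estimate_ph(r: float, g: float, b: float) -> float:
    """Interpolate a pH estimate from the sensor patch's hue."""
    return float(np.interp(hue_from_rgb(r, g, b), CAL_HUES, CAL_PH))

# Example: an orange patch (hue ~37 degrees) reads as acidic, pH ~5.6.
print(round(estimate_ph(0.9, 0.6, 0.1), 2))
```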
It's not entirely clear why there is suddenly so much interest in harnessing our dental assets for technological effect, but teeth are definitely being seen as a hot new technology area, especially for wearables, Vega says.
Yet as Walmsley points out, teeth are a sensitive body structure with which liberties should not be taken, so eager HCI researchers will have to take care not to bite off more than they can chew.
Paul Marks is a technology journalist, writer, and editor based in London, U.K.