
Communications of the ACM

News

Making Sense of Sensors


Johnny Chung Lee at Creativity World Forum 2008. Credit: Peter Baert

By Alex Wright

When Johnny Chung Lee started hacking his Nintendo Wiimote to explore whether the infrared sensors could detect simple finger movements, he scarcely expected the project to catapult him to YouTube microstardom. Yet within a few months, his four-minute demonstration video had garnered nearly two million views, earning him a loyal fan base of fellow DIY hackers, and helping him secure a plum job at Microsoft and a flattering profile in The New York Times. The Wiimote demo did more than boost Lee's job prospects, however. It also helped spark a surge of public interest in the possibilities of position sensors and gestural interfaces.
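
Lee's demo reduces to simple geometry: the Wiimote's infrared camera reports the coordinates of up to four bright points, and gestures emerge from how those points move relative to one another. The sketch below illustrates the idea in Python; the blob coordinates and pinch threshold are invented for illustration, and in a real project a library such as cwiid or wiiuse would supply the camera data.

    # A minimal sketch of Wiimote-style finger tracking. The Wiimote's IR
    # camera reports up to four bright points ("blobs") on a 1024x768 grid;
    # here the blob coordinates and the threshold are invented values.
    import math

    PINCH_THRESHOLD = 60  # camera pixels; tune for marker size and range

    def detect_pinch(blobs):
        """Return True when two tracked fingertips move close enough to 'pinch'."""
        if len(blobs) < 2:
            return False
        (x1, y1), (x2, y2) = blobs[0], blobs[1]
        return math.hypot(x2 - x1, y2 - y1) < PINCH_THRESHOLD

    # Two reflective markers drifting together register as a pinch:
    print(detect_pinch([(300, 400), (340, 420)]))  # True  (about 45 px apart)
    print(detect_pinch([(100, 100), (600, 500)]))  # False (fingers spread)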

While position sensors have been around for years, dating at least as far back as the University of Illinois at Chicago's early "dataglove" prototype in the 1970s, they have largely failed to penetrate the consumer mainstream despite periodic waves of hype. That may be starting to change, however, as evidenced by the popularity of CNN's presidential election night coverage featuring ubiquitous—some would say gratuitous—images of newscasters poking, pulling, and prodding interactive graphics on a giant "Magic Wall" (which was memorably parodied in a Saturday Night Live skit with Fred Armisen twiddling a map of the United States while declaring, "Check out Michigan—I can make it bounce!").

The CNN Magic Wall originated with Jeff Han's groundbreaking work in multitouch interaction (which also caused a YouTube splash when the first demos appeared online in 2006). In the age of the iPhone, however, multitouch screens have quickly become so commonplace as to seem almost banal. But researchers are now starting to explore more provocative visions for sensor-based applications that stretch far beyond the relatively simple manipulation of images on a two-dimensional screen. (For more about multitouch devices, see Ted Selker's article "Touching the Future" in the Dec. 2008 issue of Communications.)

At Philips Research Labs, researcher Mark Mertens recently filed a patent for a "throwable display" that updates itself based on the unit's position and trajectory. Envisioned primarily as a gaming device, the system employs a combination of an accelerometer and a triangulation system based on GHz radiation that measures flight time and location in relation to a set of fixed beacons and/or human actors. For example, the display could show a humanoid image sticking out its tongue to a player from a distance; then, when the display gets closer to the player, it might change its expression to a please-don't-hit-me smile. Once the sensors can deliver accurate data about the unit's location, orientation, and speed, says Mertens, "the rest is mathematics."
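
When Mertens says the rest is mathematics, the mathematics is not exotic. A minimal sketch, assuming time-of-flight distances to three fixed beacons at known planar positions: each distance defines a circle around its beacon, and subtracting the circle equations pairwise leaves a small linear system in the unknown position. The beacon layout and distances below are invented for illustration.

    # 2D trilateration: each measured time-of-flight yields a distance
    # (a circle) around a fixed beacon; subtracting the circle equations
    # pairwise leaves a linear system in the unknown position (x, y).

    def trilaterate(beacons, distances):
        (x1, y1), (x2, y2), (x3, y3) = beacons
        r1, r2, r3 = distances
        # Linearized system A [x, y]^T = b from pairwise subtraction
        a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
        a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
        b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
        b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
        det = a11 * a22 - a12 * a21  # nonzero if beacons are not collinear
        return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

    # A unit at (3, 4) with beacons at three corners of a 10 m square:
    print(trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65**0.5, 45**0.5]))
    # -> approximately (3.0, 4.0)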


"You shouldn't just look at [displays], but interact with them physically: throw them, kick them, spin them," says Mark Mertens.


In this case, sensors are only part of the story. To stretch beyond traditional two-dimensional interactions, Mertens wants to explore new possibilities in display technology, incorporating Philips' innovative pillow-shaped display. Mertens thinks the combination of sensors and new, more flexible displays opens up all kinds of opportunities for innovation. "You shouldn't just look at [displays]," he says, "but interact with them physically: throw them, kick them, spin them."

In a sensor-equipped world, almost anything can become an interface. At Microsoft, researcher Paul Dietz is working on a new sensor technology called SurfaceWare that allows liquid containers to detect their contents and send automatic signals in response to changing conditions. In the technology's simplest application, a near-empty glass could automatically signal for a refill. Beyond realizing efficiency gains for harried bartenders, the technology holds out all kinds of possibilities for "smart" liquid containers.

While working on a predecessor project called iGlassware, Dietz used passive RFID tags married to capacitance sensors to determine the contents of a container, but that solution didn't work well with thick liquids that tend to coat a container's interior. So Dietz developed an optical prism embedded in the glass container that reflects light when in the air but not when covered with fluid, enabling the container to detect a wider range of fluids. Building on this work, Dietz is developing a new class of surface interaction widgets called dynamic tags. "Basically, these are transducers that can sense physical quantities in the environment and present the measurement in an optical form that can be read," says Dietz.
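
The physics behind the prism is total internal reflection: light stays trapped when the prism's outer face borders air, but escapes once a liquid with a refractive index closer to glass covers it, so a drop in reflected intensity marks a submerged prism. Below is a minimal sketch of how a stack of such prisms might report a fill level; the intensity readings, threshold, and refill policy are all invented for illustration.

    # Each prism reflects strongly while bordered by air but "leaks" light
    # once liquid covers it, so low reflected intensity marks a submerged
    # prism. Reading a stack of prisms up the glass wall gives a fill level.

    REFLECT_THRESHOLD = 0.5  # normalized intensity; above = air, below = liquid

    def fill_level(reflected, heights):
        """reflected[i] is the intensity at prism i mounted at heights[i] mm;
        returns the height of the topmost submerged prism, or 0 if empty."""
        covered = [h for r, h in zip(reflected, heights) if r < REFLECT_THRESHOLD]
        return max(covered, default=0)

    def needs_refill(level_mm, glass_height_mm=100, fraction=0.25):
        # Signal the bartender when the glass drops below a quarter full
        return level_mm < fraction * glass_height_mm

    # Prisms at 20-80 mm; the bottom two read low (submerged):
    level = fill_level([0.1, 0.2, 0.9, 0.95], [20, 40, 60, 80])
    print(level, needs_refill(level))  # -> 40 False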

Before coming to Microsoft, Dietz worked on a series of sensor-based technologies for Mitsubishi and Disney, where he created a set of location-aware water installations, including a fountain that withdraws its stream when an observer tries to touch it, a musical harp with strings made of water, and a liquid touch screen known as the TouchPond.


Foldable Interfaces

While the market for sensor-based pillows, drinking glasses, and water fountains remains to be proven, other researchers are working on sensor-based interfaces with more practical applications. Before his brush with YouTube fame, Lee developed a foldable interface designed to mimic one of the world's most familiar analog interfaces: the newspaper. Using an innovative approach to tracking objects with projected light, the display dynamically updates its contents based on its location and orientation. After experimenting with a number of alternative sensor technologies, Lee settled on infrared sensors like the ones found in the Wiimote to determine the object's position.


Johnny Chung Lee's inspiration for the foldable display came from Renaissance, an animated cyberpunk/science-fiction detective film.


An alternative approach might have involved using a camera to track the display by giving the display unit unusual optical characteristics, such as an orange light flashing at regular intervals. But Lee believes camera-based tracking is inherently limiting; he prefers infrared sensors because they are cheaper and more reliable.

"One of the problems of camera tracking is that if there is more than one point, the camera can't tell them apart. But a light sensor can tell them apart. With cameras, the lights are broadcasting outward, but with a projector, the light 'knows' where it came from," Lee explains. "There's relatively little infrared interference in the world. Very little else is blinking at 50 kHz."

The foldable display looks like something right out of Minority Report, the 2002 Steven Spielberg film that has served as a touchstone for many interface designers. However, Lee says his inspiration actually came from Renaissance, an animated cyberpunk/science-fiction detective film by Christian Volckman. Lee admits to finding inspiration throughout the pop culture world, often basing his projects on ideas found in Japanese anime and other films. "Usually I already have a list of things I'd like to do," Lee says, "but films sometimes give me the inspiration to actually do it." In a world where most people have become accustomed to the traditional display-keyboard-mouse interface, looking to films and other reference points outside the world of computer science may help developers find fertile new ground for thinking about alternative interfaces.

Some developers find inspiration in science-fiction and fantasy films. For others, the inspiration comes from hard-won experience. Dietz traces his original inspiration back thirty years to his childhood, when he and his brothers hacked a Heathkit H-8 computer kit to control a toy train set by using LEDs as light receivers and small magnetic reed switches wired into the computer. As Dietz recalls, "The train would automatically start up and go from station to station, switching the switches, displaying a live map, and even playing appropriate background music. Not bad for 1978!"

Whatever the inspiration, researchers are gradually recognizing the potential of position sensors to help them overcome the limitations of traditional user interfaces. "We are leading increasingly sedentary lives, locked with our eyes onto the very small world behind the screen of a traditional display," says Mertens, "and that is still how many computer games force you to behave."

Mertens believes the future of interaction design involves bringing the computer out from the other side of the glass, and building bridges into the physical realm. "Instead of playing behind the screen in a virtual computer world, you have to take the display into the real world," he says.

When sensors start to do more than just transmit sensory data to a traditional two-dimensional computer screen, the way we interact with computers will fundamentally shift, as physical objects become "smarter" about themselves and the world around them. When that starts to happen—when computers start taking shape in three dimensions—sensors may just start making sense.


Author

Alex Wright is a writer and information architect who lives and works in New York City.


Footnotes

DOI: http://doi.acm.org/10.1145/1461928.1461934


Figures

Figure. Johnny Chung Lee discussing creative uses of the Wiimote at Creativity World Forum 2008.



©2009 ACM  0001-0782/09/0200  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.


 
