
Communications of the ACM

Research highlights

Technical Perspective: A Graphical Sense of Touch


One of the major innovations in computing was the invention of the graphical user interface at MIT, SRI, and Xerox PARC. The combination of computer graphics hardware with a mouse and keyboard enabled a new class of highly interactive applications based on direct manipulation of on-screen displays.

It is interesting to reflect on the relative rates of advance of input and output technology since the very first systems. At the time, graphics hardware consisted of single-bit framebuffers driving black-and-white displays.

Moving forward, we now have flat-panel, high-definition, full-color displays driven by inexpensive, high-performance graphics chips capable of drawing three-dimensional virtual worlds in real time.

Graphics hardware draws tens of millions of polygons and tens of billions of pixels per second. In comparison, most personal computers ship with a mouse and keyboard similar to those used at Xerox PARC in the 1970s.

The lack of progress in input technology has caused computers to become sensory deprived. They can output incredible displays of information, but they receive almost no information from their surrounding environment.

Contrast this situation to a living organism. Most organisms have extraordinary abilities to sense their environment, but limited ability to display information (except by movement; a few animals like the chameleon and the cuttlefish can change their skin color). Perhaps this explains why we enjoy interacting with our pets more than with our computers.

Stuart Card, a Senior Research Fellow at Xerox PARC, has observed that one of the breakthrough ideas in the graphical user interface was to amplify input relative to output. One mechanism is to overlay input on output; examples include on-screen buttons and menus. By leveraging output technology, we augment limited input by providing context. Another strategy for enhancing input is to use pattern recognition to extract as much information as possible from the stream of sensed data.

Fortunately, this state of sensory deprivation is beginning to change.

The biggest recent development is the commercial emergence of multitouch displays. Traditional display input technology only returns a single X, Y position at a time. As a result, the user can only point to a single location at a time and, consequently, use only one finger or one hand at a time.

In a multitouch display, multiple points are sensed simultaneously. This allows the application to sense multiple fingers from both hands. This in turn makes it possible to recognize finger gestures or coordinated two-handed motion.

Successful commercial examples of multitouch displays include the Apple iPhone and the Microsoft Surface. The iPhone has a unique user interface that is enabled by an embedded multitouch display. To zoom into a map, the user simply moves their fingers apart. Beyond touch, the iPhone has several additional built-in sensory modalities, including a microphone, camera, accelerometer, and a GPS receiver and compass. Relative to a modern desktop computer, it is sensory rich and output poor.
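To make the gesture-recognition idea concrete, the pinch-to-zoom interaction can be reduced to comparing the distance between two tracked finger positions across frames. The following is a minimal Python sketch under that assumption; the function name is hypothetical and this is not Apple's implementation or API.

    import math

    def pinch_scale(prev_points, curr_points):
        """Derive a zoom factor from two touch points tracked across frames.

        prev_points, curr_points: [(x0, y0), (x1, y1)], the two finger
        positions in the previous and current frame. Returns the ratio of
        current to previous finger separation: > 1 means the fingers moved
        apart (zoom in), < 1 means they pinched together (zoom out).
        """
        def separation(points):
            (x0, y0), (x1, y1) = points
            return math.hypot(x1 - x0, y1 - y0)

        prev_d = separation(prev_points)
        if prev_d == 0:
            return 1.0  # degenerate case: fingers start at the same point
        return separation(curr_points) / prev_d

    # Example: fingers move from 100 px apart to 150 px apart -> 1.5x zoom.
    print(pinch_scale([(10, 10), (110, 10)], [(0, 10), (150, 10)]))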


The following paper by a team from Microsoft Research introduces a novel way to build a multitouch interface: the ThinSight system. They modify a flat-panel display to sense touch directly. Previously, touch was sensed indirectly, for example by mounting a camera to observe the surface of the display. Camera-based approaches require a fairly large space, suffer from occlusion, and are difficult to calibrate. In the ThinSight system, the LED backlight that drives the display is modified to include infrared LEDs and sensors interspersed among the visible-light emitters.

The display surface can both emit light and sense position. Distributing sensors throughout the display substrate yields a compact, efficient design.
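One plausible way such a sensor grid could be turned into touch positions is to threshold the infrared reflectance readings and group adjacent active sensors into blobs, taking each blob's centroid as a fingertip location. The Python sketch below illustrates this idea; the threshold value, grid layout, and function name are illustrative assumptions, not details taken from the paper.

    def detect_touches(ir_frame, threshold=0.5):
        """Locate touch points in a 2D grid of normalized IR readings.

        ir_frame: list of rows of floats in [0, 1]; a fingertip near the
        display reflects infrared back, raising nearby sensor values.
        Returns one (row, col) centroid per detected blob. The threshold
        is an illustrative assumption, not a value from the paper.
        """
        rows, cols = len(ir_frame), len(ir_frame[0])
        active = {(r, c) for r in range(rows) for c in range(cols)
                  if ir_frame[r][c] >= threshold}
        touches = []
        while active:
            stack = [active.pop()]  # flood-fill one connected blob
            blob = []
            while stack:
                r, c = stack.pop()
                blob.append((r, c))
                for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if nb in active:
                        active.remove(nb)
                        stack.append(nb)
            # The blob centroid approximates the fingertip position.
            touches.append((sum(r for r, _ in blob) / len(blob),
                            sum(c for _, c in blob) / len(blob)))
        return touches

    # One fingertip pressed near the center of a 4x4 sensor grid.
    frame = [[0.0, 0.1, 0.0, 0.0],
             [0.1, 0.9, 0.8, 0.0],
             [0.0, 0.7, 0.6, 0.0],
             [0.0, 0.0, 0.0, 0.0]]
    print(detect_touches(frame))  # -> [(1.5, 1.5)]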

This paper is important because it proposes an innovative design that addresses a long-standing problem. However, there is much more work to do in this area. The authors have added the sense of touch to the display, but, like real touch sensors, it has limited resolution, and the sensed object must be in contact with the display. In Pierre Wellner's pioneering work on the DigitalDesk, the computer sensed remote hand positions as well as the position, type, and content of objects on the desk. Unfortunately, Wellner's system involved bulky cameras and projectors.

Hopefully HCI researchers will expand on the innovative sensing strategy proposed in this paper. We want compact interactive devices that have rich sensory capabilities including touch, sight, and hearing.

Author

Pat Hanrahan is the CANON Professor of Computer Science and Electrical Engineering in the Computer Graphics Laboratory at Stanford University, Stanford, CA.

Footnotes

DOI: http://doi.acm.org/10.1145/1610252.1610276


©2009 ACM  0001-0782/09/1200  $10.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.



 
