
Communications of the ACM

Organic user interfaces

Organic Interaction Technologies: From Stone to Skin


The mouse is the most successful and popular input device in the history of computing. However, it will never be the ultimate input device, because it does not fully exploit its users' sophisticated manipulation skills. A mouse gives us control of only a single position (x,y) at any given moment, along with additional button presses (on/off). Feedback related to the input is normally available only as visual information. In physical manipulation, on the other hand, we easily control multiple points and continuous parameters (such as pressure) at the same time. Feedback is not limited to sight but often includes touch, sound, temperature, and even air movement. Feedback itself is also more tightly unified with input than in traditional graphical user interfaces (GUIs), where input and output are often separate. The parts of our body we use for interaction are not limited to the fingers; the palm, the arm, even the entire body are all potentially usable. Several recent approaches have sought to incorporate these human manipulation skills into human-computer interaction. I use the terms "organic" and "organic interaction" for such interfaces because they more closely resemble natural human-physical and human-human interaction (such as shaking hands and gesturing).

The table here outlines the features of organic interaction, comparing them with those of traditional user interfaces. Even as the number of novel interaction methods based on sensing technologies has grown, such methods have been used mainly for special purposes (such as interactive art). Myron Krueger's "Videoplace" (bubblegum.parsons.edu) was an early example (early 1970s); in it, a video camera captured a user's body silhouette, and the full-body shape, not just finger positions, was used as input to the computer system. In the next few years, as the cost of sensing and computation comes down, such organic interaction technologies are likely to become viable alternatives to traditional mouse-based interaction. Here, I explore notable examples and discuss future research topics needed to advance organic user interfaces and make them more mainstream.


HoloWall

HoloWall [5] is a camera-based interactive wall/table system that uses a combination of an infrared (IR) camera and an array of IR lights installed behind the wall (see Figure 1). The camera captures images of the back surface of the wall (illuminated by the IR lights). An IR-blocking filter built into the LCD projector ensures that the camera is not affected by the projected image.

Since the rear-projection panel is semi-opaque and diffusive, the user's hand in front of the screen is not visible to the camera when it is far (more than 30cm) away. When the user moves a finger close enough to the screen (10cm or less), the finger reflects IR light and becomes visible to the camera. A simple image-processing technique (such as frame subtraction) then separates the finger shape from the background.
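
To make the image-processing step concrete, the sketch below detects contact regions by frame subtraction and thresholding with OpenCV. It is only an illustration of the general technique; the threshold and minimum-area values are assumptions that would need tuning, and the actual HoloWall implementation is not published in this form.

```python
import cv2

REFLECTION_THRESHOLD = 40   # assumed brightness difference; tune per setup
MIN_CONTACT_AREA = 50       # ignore specks smaller than this many pixels

def detect_contacts(background_ir, current_ir):
    """Return bounding boxes of regions that reflect IR, i.e. fingers or
    objects within a few centimeters of the diffusive screen."""
    # Frame subtraction: anything much brighter than the empty-screen
    # reference image is reflecting the IR illumination.
    diff = cv2.absdiff(current_ir, background_ir)
    _, mask = cv2.threshold(diff, REFLECTION_THRESHOLD, 255, cv2.THRESH_BINARY)

    # Clean up noise, then find connected components (one per contact).
    mask = cv2.medianBlur(mask, 5)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_CONTACT_AREA]
```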

Using this sensing principle, HoloWall distinguishes multiple hand and finger contact points, enabling typical multi-touch interactions (such as zooming with two hands, as in Figure 1c). Moreover, it recognizes the human hand, arm, and body, as well as physical objects (such as rods) and visual patterns (such as 2D barcodes attached to objects), as in Figure 1c and Figure 1d.

Figure 1c shows two users playing a ping-pong game on HoloWall, as demonstrated at the SIGGRAPH conference in 1998. Although the system was originally designed for hand and body gestures, some participants used other physical objects as instruments for interaction; the system recognizes any object that reflects the IR light. Such dynamic expandability is an interesting feature of organic user interfaces.

Note that a sensing principle similar to that of HoloWall is also used in other interactive-surface systems (such as Microsoft Surface, www.microsoft.com/surface/). Perceptive Pixel [2] is another optical multi-touch input system, though it is based on a sensing principle different from the one used by HoloWall.


SmartSkin

SmartSkin (see Figure 2) is a multi-touch interactive surface system based on capacitive sensing [7] that uses a grid-shaped antenna to measure hand and finger proximity. The antenna consists of transmitter and receiver electrodes (copper wires); the vertical wires are transmitters, and the horizontal wires are receivers. When one transmitter is excited by a wave signal (typically several hundred kHz), the receiver picks up the signal because each crossing point (transmitter/receiver pair) functions as a capacitor. The magnitude of the received signal is proportional to the frequency and voltage of the transmitted signal, as well as to the capacitance between the two electrodes. When a conductive, grounded object approaches a crossing point, it capacitively couples to the electrodes and drains the wave signal, so the received signal amplitude becomes weak. Measuring this effect makes it possible to detect the proximity of a conductive object (such as a human hand).
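
A minimal scanning loop for such a grid might look like the following sketch. The grid dimensions and the read_amplitude function are hypothetical stand-ins for the sensor hardware; the point is simply that each transmitter is excited in turn, and the drop in received amplitude at every crossing point yields a proximity value.

```python
import numpy as np

GRID_TX, GRID_RX = 8, 9   # transmitter columns and receiver rows (example sizes)

def read_amplitude(tx: int, rx: int) -> float:
    """Hypothetical hardware call: excite transmitter `tx` and return the
    signal amplitude measured at receiver `rx`."""
    raise NotImplementedError

def scan_proximity(baseline: np.ndarray) -> np.ndarray:
    """Scan every crossing point and return a proximity map.

    `baseline` holds the amplitudes measured with nothing near the surface.
    A grounded hand near a crossing drains the signal, so proximity is the
    drop from the baseline amplitude, normalized to 0..1."""
    raw = np.empty((GRID_RX, GRID_TX))
    for tx in range(GRID_TX):          # excite one column at a time
        for rx in range(GRID_RX):
            raw[rx, tx] = read_amplitude(tx, rx)
    drop = baseline - raw
    return np.clip(drop / baseline, 0.0, 1.0)
```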


The current level of development of organic user interfaces is the equivalent of where the mouse was when it was first invented.


Since the hand detection is done through capacitive sensing, all the necessary sensing elements can be completely embedded in the surface. Unlike camera-based systems, the SmartSkin sensor is not affected by changes in the intensity of the environmental lighting. The surface is also not limited to being flat; the surface of any object, including furniture and robots, potentially provides such interactivity, functioning like the skin of a living creature.

The system detects the effect of the capacitance change when the user's hand is placed 5cm–10cm from the table. To accurately determine the hand's position (the peak of the potential field), SmartSkin uses bicubic interpolation to analyze the sensed data; the position of the hand is determined by finding the peak of the interpolated surface. The precision of the calculated position is much finer than the size of a grid cell (10cm); the current implementation has an accuracy of 1cm.
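
As an illustration of this interpolation-based peak finding, the sketch below upsamples a coarse proximity map (like the one produced by the scanning sketch above) with a bicubic spline and reads off the peak, giving a position estimate much finer than the 10cm grid. The SciPy-based implementation and grid assumptions are mine, not SmartSkin's published code.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

CELL_SIZE_CM = 10.0   # spacing of the sensor grid

def locate_hand(proximity: np.ndarray, upsample: int = 10):
    """Interpolate the coarse proximity map (at least 4x4 cells) and return
    the peak position in centimeters. A cubic spline stands in for the
    bicubic interpolation described in the article."""
    rows, cols = proximity.shape
    y = np.arange(rows) * CELL_SIZE_CM
    x = np.arange(cols) * CELL_SIZE_CM
    spline = RectBivariateSpline(y, x, proximity, kx=3, ky=3)

    fine_y = np.linspace(y[0], y[-1], rows * upsample)
    fine_x = np.linspace(x[0], x[-1], cols * upsample)
    field = spline(fine_y, fine_x)          # dense, interpolated field

    iy, ix = np.unravel_index(np.argmax(field), field.shape)
    return fine_x[ix], fine_y[iy]           # peak = estimated hand position
```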

SmartSkin's sensor configuration also enables shape-based manipulation that does not explicitly use the hand's 2D position. A potential field created by sensor inputs is instead used to move objects. As the hand approaches the surface of the table, each intersection of the sensor grid measures the capacitance between itself and the hand. This field helps define various rules of object manipulation. For example, an object that descends to a lower potential area is repelled from the hand. The direction and speed of the object's motion can be controlled by changing the hand's position around the object.
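
One plausible way to turn such a field into a manipulation rule is to let an object slide down the field's gradient, away from the approaching hand, with speed proportional to the slope. The sketch below illustrates that idea with made-up constants; it is not the exact rule set used by SmartSkin.

```python
import numpy as np

def step_object(pos, potential, gain=5.0, dt=0.05):
    """Move an object one time step away from high-potential (near-hand)
    regions. `potential` is the interpolated proximity field sampled on a
    fine grid; `pos` is (x, y) in grid cells."""
    gy, gx = np.gradient(potential)                  # field slope per cell
    ix = int(np.clip(round(pos[0]), 0, potential.shape[1] - 1))
    iy = int(np.clip(round(pos[1]), 0, potential.shape[0] - 1))
    # Descend the potential: velocity points away from the approaching hand,
    # and is larger where the field is steeper (hand closer).
    vx, vy = -gain * gx[iy, ix], -gain * gy[iy, ix]
    return (pos[0] + vx * dt, pos[1] + vy * dt)
```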

In my lab's tests, many SmartSkin users quickly learned to use the interface even though they did not fully understand its underlying dynamics. Many used two hands or even their arms; for example, one can sweep the table surface with an arm to move a group of objects, and two arms can be used to trap and move objects (see Figure 2b).

Using the same sensing principle with a denser grid antenna layout, SmartSkin determines the shape of a human hand (see Figure 2c and Figure 2d). The same peak-detection algorithm also applies; rather than tracking a single hand position, it tracks the positions of multiple fingertips.
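
A simple way to extract multiple fingertip positions from the finer field is local-maximum detection, sketched below with SciPy; the neighborhood size and strength threshold are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def fingertip_peaks(field: np.ndarray, min_strength: float = 0.3,
                    neighborhood: int = 5):
    """Find local maxima of the dense proximity field; with the finer grid,
    each fingertip produces its own peak."""
    local_max = (field == maximum_filter(field, size=neighborhood))
    strong = field >= min_strength          # ignore weak, noisy maxima
    ys, xs = np.nonzero(local_max & strong)
    return list(zip(xs, ys))                # one (x, y) per detected fingertip
```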

An algorithm known as As-Rigid-As-Possible Shape Manipulation deforms objects with multiple control points [4]; Figure 2e shows its implementation in SmartSkin. Users manipulate graphical objects directly with multiple finger control points.


DiamondTouch

DiamondTouch [1], developed at Mitsubishi Electric Research Laboratories, is another interactive table system based on capacitive sensing. Its unique feature is the ability to distinguish among multiple users. The grid-shaped antenna embedded in the DiamondTouch table transmits a time-modulated signal, and each user sits in a special chair with a built-in signal-receiving electrode. When a user's finger touches the surface, a capacitive connection from the grid antenna to the signal-receiving chair is established through the user's body. This connection information determines the finger's position on the surface, as well as which user is manipulating it. Because the table transmits a modulated signal, multiple users can operate the same surface simultaneously without the system losing track of any user's identity.

DiamondTouch also supports what might be called semi-multi-touch operation: it detects multiple contact points, but with some ambiguity. For instance, when a user touches two points, (100, 200) and (300, 400), the system cannot distinguish them from another pair, (100, 400) and (300, 200). For simple multi-touch interactions (such as pinching, where scale is controlled by the distance between two fingers), however, this ambiguity is not a problem.
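
The row/column character of the sensing is easy to illustrate: two touches activate two columns and two rows, either pairing of which is consistent with the readings, yet the diagonal span used for a pinch is the same in both cases. The following sketch is a simplified model of that situation, not DiamondTouch's actual signal processing.

```python
import math

def candidate_points(active_cols, active_rows):
    """With row/column sensing, any active column can pair with any active
    row: (100, 200)+(300, 400) and (100, 400)+(300, 200) produce the same
    readings, so both pairings are returned as candidates."""
    return [(c, r) for c in active_cols for r in active_rows]

def pinch_distance(active_cols, active_rows):
    """The diagonal span is identical for both candidate pairings, so a
    pinch gesture (scale from finger distance) works despite the ambiguity."""
    dx = max(active_cols) - min(active_cols)
    dy = max(active_rows) - min(active_rows)
    return math.hypot(dx, dy)

# Both interpretations of the example in the text share one pinch distance:
assert pinch_distance([100, 300], [200, 400]) == math.hypot(200, 200)
```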


PreSense: Touch- and Pressure-Sensing Interaction

Touch-sensing input [3] extends the mouse's usability by adding a touch sensor. While the buttons of a normal mouse have only two states (nonpress and press), a button in a touch-sensing device provides three states (nontouch, touch, and press). The additional state allows more precise control of the system. For example, the toolbox of a GUI application can automatically display more tools when the user moves the cursor into the toolbar region with a finger resting on (but not pressing) the button.
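
A minimal sketch of the three-state model follows; the toolbar policy is just an assumed illustration of the example above, not code from [3].

```python
from enum import Enum, auto

class ButtonState(Enum):
    NONTOUCH = auto()   # finger not on the button
    TOUCH = auto()      # finger resting on the button, not pressed
    PRESS = auto()      # button pressed down

def update_toolbar(state: ButtonState, cursor_over_toolbar: bool) -> str:
    """Toy policy: the extra TOUCH state lets the UI prepare before a click,
    e.g. expanding the toolbar the moment a touching finger hovers over it."""
    if state is ButtonState.PRESS:
        return "activate tool under cursor"
    if state is ButtonState.TOUCH and cursor_over_toolbar:
        return "expand toolbar to show more tools"
    return "show compact toolbar"
```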

Pressure is another useful input parameter for organic interaction. We intuitively use and control pressure in natural communication (such as when shaking hands). With a simple pressure sensor (such as a force-sensitive resistor) embedded in a regular mouse or touchpad, the device can easily sense finger pressure by measuring the sensor's resistance.
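
One common way to read such a sensor is through a voltage divider and an analog-to-digital converter; the sketch below recovers the resistance and maps it to a rough 0-to-1 pressure value. The circuit constants and the logarithmic mapping are assumptions, not details of the PreSense hardware.

```python
import math

V_SUPPLY = 3.3        # divider supply voltage (assumed)
R_FIXED = 10_000.0    # fixed divider resistor in ohms (assumed)

def fsr_resistance(v_out: float) -> float:
    """FSR in series with R_FIXED; v_out is measured across R_FIXED.
    Smaller resistance (harder press) gives a larger v_out."""
    return R_FIXED * (V_SUPPLY - v_out) / max(v_out, 1e-6)

def normalized_pressure(v_out: float, r_light=100_000.0, r_firm=1_000.0) -> float:
    """Map the (roughly log-scale) resistance range between a light touch
    and a firm press onto 0..1."""
    r = min(max(fsr_resistance(v_out), r_firm), r_light)
    return (math.log(r_light) - math.log(r)) / (math.log(r_light) - math.log(r_firm))
```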

PreSense [8] is a touch- and pressure-sensing input device that uses finger pressure, as well as finger position (see Figure 3). It consists of a capacitive touchpad, a force-sensitive-resistor pressure sensor, and an actuator for tactile feedback. It recognizes finger contact by measuring the change in capacitance on the touchpad surface. By combining pressure sensing and tactile feedback, it can emulate a variety of buttons (such as one-level and two-level) by setting thresholds on the pressure value. For example, a user can "soft press" a target to select it and "hard press" it to display a pop-up menu.
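
A sketch of the threshold logic might look like the following; the pressure thresholds and hysteresis margin are invented values, and the real PreSense parameters are not given in the article.

```python
SOFT_THRESHOLD = 0.25    # normalized pressure for "soft press" (select)
HARD_THRESHOLD = 0.70    # normalized pressure for "hard press" (pop-up menu)
RELEASE_MARGIN = 0.05    # hysteresis so the level does not flicker

def classify_press(pressure: float, previous_level: int) -> int:
    """Map an analog pressure reading (0..1) to 0 = touch only,
    1 = soft press, 2 = hard press, with simple hysteresis."""
    thresholds = [SOFT_THRESHOLD, HARD_THRESHOLD]
    level = previous_level
    # Rise to a higher level as soon as its threshold is crossed.
    while level < 2 and pressure >= thresholds[level]:
        level += 1
    # Fall back only after pressure drops clearly below the current level.
    while level > 0 and pressure < thresholds[level - 1] - RELEASE_MARGIN:
        level -= 1
    return level

# e.g. level 1 selects the target; level 2 opens a pop-up menu, and each
# transition can trigger a tactile "click" via the feedback actuator.
```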

Analog pressure sensing enables users to control continuous parameters (such as the scale of a displayed image). The finger contact area is used to distinguish between scaling directions (scale-up and scale-down): by slightly changing how the finger rests on the pad, one can control both zooming in and zooming out with a single finger (see Figure 3b).
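
The sketch below illustrates how pressure and contact area could be combined for one-finger zooming: contact area picks the direction, pressure sets the rate. The baseline, dead-band, and gain values are assumptions for illustration only.

```python
AREA_BASELINE = 120.0    # contact area (in touchpad cells) for a neutral touch
AREA_DEADBAND = 15.0     # ignore small fluctuations around the baseline

def zoom_factor(pressure: float, contact_area: float, dt: float) -> float:
    """Return a multiplicative zoom step: pressure sets the zoom speed, and
    a flatter (larger-area) vs. more upright (smaller-area) finger selects
    zoom-in vs. zoom-out."""
    if contact_area > AREA_BASELINE + AREA_DEADBAND:
        direction = 1.0           # flattened finger: zoom in
    elif contact_area < AREA_BASELINE - AREA_DEADBAND:
        direction = -1.0          # fingertip only: zoom out
    else:
        return 1.0                # neutral contact: no zoom
    rate = 1.5 * pressure         # harder press, faster zoom
    return (1.0 + rate * dt) ** direction
```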

Pressure is useful for explicit parameter control (such as scaling) while offering the possibility of sensing the user's implicit or emotional state. When a user is, say, frustrated with the system, his or her mouse button pressure might change from the normal state, and the system would be able to react to that frustration.

Finger input with pressure, combined with tactile feedback, is the most common form of natural interaction. As in shiatsu (Japanese finger-pressure therapy), users of PreSense feel and control the performance of computer systems directly.


Research Issues

Because organic user interfaces represent such a new and emerging research field, many related research challenges and issues require further study. In what follows, I outline four of them:

Interaction techniques for OUIs. GUIs have a long history and incorporate a large number of interaction techniques. When the mouse was invented by Douglas Engelbart at Stanford Research Institute in 1963, it was used only to point at on-screen objects; mouse-based interaction techniques (such as pop-up menus and scrollbars) followed. The current level of development of organic user interfaces is the equivalent of where the mouse was when first invented. For multi-touch interaction, only a simple set of techniques (such as zooming) has been introduced, though many more should be possible; the interaction techniques explored in [4] may be candidates.

Stone (tool) vs. skin. It is also interesting and worthwhile to consider the similarities and differences between tangible UIs and organic UIs. Although the two types overlap in many ways, the conceptual differences are clear. Tangible UI systems often use multiple physical objects as tools for manipulation; each object is graspable, so users can apply their physical manipulation skills. Because these objects often have a concrete meaning in the application (they are called physical icons, or "phicons"), many tangible systems are domain-specific (tuned for a particular application). In organic UI systems, users interact directly with possibly curved interactive surfaces (such as walls, tables, and electronic paper) with no intermediate objects; interactions are more generic and less application-oriented. This situation may be compared to real-world interaction: we use physical instruments (tools) to manipulate things but prefer direct contact for human-to-human communication (such as a handshake). Tangible UIs are more logical, or manipulation-oriented, whereas organic UIs are more emotional, or communication-oriented, though more real-world experience is needed for a rigorous comparison.

Other modalities for interaction. In organic UIs, the hands are still the primary body parts used for interaction, but we should be able to use other parts, as we do in natural communication. Eye gaze is one possibility. Another is blowing, which is useful for manipulation because it is controllable while also conveying emotion; a technique developed in [6] determines the direction of a blow based on acoustic analysis. The BYU-BYU-View system [9] adds the sensation of air movement to the interaction between a user and a virtual environment, delivering information directly to the skin to make telecommunication feel more real.


Even ceilings may someday function as an information display.



Interaction Between Real World and Computer

In the context of traditional human-computer interaction, the term "interaction" generally means information exchange between a human and a computer. In the near future, interaction will also involve more physical experience (such as illumination, air, temperature, humidity, and energy). The interaction concept is thus no longer limited to interaction between humans and computers but can be expanded to cover interaction between the physical world and computers. For example, future interactive wall systems will react to human gesture, be aware of the air in the room, and be able to stabilize conditions (such as temperature and humidity) in the same way a cell membrane maintains the stability of a cell environment. Interactive walls may also be able to control sound energy to dynamically create silent spaces. Even ceilings may someday function as an information display. In this way, future interactive systems may more seamlessly interact with and control our physical environments.


References

1. Dietz, P. and Leigh, D. DiamondTouch: A multiuser touch technology. In Proceedings of the 14th Annual ACM Symposium on User Interface Software and Technology (Orlando, FL, Nov. 11–14). ACM Press, New York, 2001, 219–226.

2. Han, J. Low-cost multitouch sensing through frustrated total internal reflection. In Proceedings of the 18th Annual ACM Symposium on User Interface Software and Technology (Seattle, Oct. 23–26). ACM Press, New York, 2005, 115–118.

3. Hinckley, K. and Sinclair, M. Touch-sensing input devices. In Proceedings of the ACM Conference on Computer-Human Interaction (Pittsburgh, PA, May 15–20). ACM Press, New York, 1999, 223–230.

4. Igarashi, T., Moscovich, T., and Hughes, J. As-Rigid-As-Possible Shape Manipulation. In Proceedings of the SIGGRAPH Conference (Los Angeles, July 31–Aug. 4). ACM Press, New York, 2005, 1134–1141.

5. Matsushita, N. and Rekimoto, J. HoloWall: Designing a finger, hand, body, and object-sensitive wall. In Proceedings of the 10th Annual ACM Symposium on User Interface Software and Technology (Banff, Alberta, Canada, Oct. 15–17). ACM Press, New York, 1997, 209–210.

6. Patel, S. and Abowd, G. BLUI: Low-cost localized blowable user interfaces. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, RI, Oct. 7–10). ACM Press, New York, 2007.

7. Rekimoto, J. SmartSkin: An infrastructure for freehand manipulation on interactive surfaces. In Proceedings of the ACM Conference on Computer-Human Interaction (Minneapolis, MN, Apr. 20–25). ACM Press, New York, 2002, 113–120.

8. Rekimoto, J., Ishizawa, T., Schwesig, C., and Oba, H. PreSense: Interaction techniques for finger-sensing input devices. In Proceedings of the 16th Annual ACM Symposium on User Interface Software and Technology (Vancouver, BC, Canada, Nov. 2–5). ACM Press, New York, 2003, 203–212.

9. Sawada, E., Ida, S., Awaji, T., Morishita, K., Aruga, T., Takeichi, R., Fujii, T., Kimura, H., Nakamura, T., Furukawa, M., Shimizu, N., Tokiwa, T., Nii, H., Sugimoto, M., and Inami, M. BYU-BYU-View: A Wind Communication Interface. In the Emerging Technologies Exhibition at SIGGRAPH (San Diego, Aug. 5–9, 2007); www.siggraph.org/s2007/attendees/etech/4.html.


Author

Jun Rekimoto ([email protected]) is a professor in the Interfaculty Initiative in Information Studies at The University of Tokyo and a director of the Interaction Laboratory at Sony Computer Science Laboratories, Inc. in Tokyo.


Footnotes

DOI: http://doi.acm.org/10.1145/1349026.1349035


Figures

Figure 1. HoloWall interactive surface system [5].

Figure 2. SmartSkin, an interactive surface system based on capacitive sensing [7].

Figure 3. PreSense 2D input device enhanced with pressure sensors. Users add pressure to control analog parameters (such as scaling) and specify "positive" and "negative" pressures by changing the size of the finger contact area on the touchpad surface. PreSense can be combined with tactile feedback to emulate a discrete button press with a "click" sensation.


Tables

Table. Traditional GUI and organic interaction compared.



©2008 ACM  0001-0782/08/0600  $5.00

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2008 ACM, Inc.


 
