The goal of the merger of ubiquitous and wearable computing should be to provide "the right information to the right person at the right place at the right time." For ubiquitous computing to reach its potential, the average person should be able to take advantage of the information on or off the job. Even at work, many people do not have desks or spend a large portion of their time away from one. Mobile access is thus the gateway technology required to make information available at any place and at any time. In addition, the computing system should be aware of the user's context, not only to respond appropriately to the user's cognitive and social state but also to anticipate the user's needs.
Table 1 briefly summarizes the past four decades of user interface evolution. While technology capability doubles every few years, a new user interface takes more than a decade to become widely deployed; the extra time is spent working out technology bugs, reducing costs, and adapting applications to the new interface. During the current decade, speech recognition, position sensing, and eye tracking should become common inputs. In the future, stereographic audio and visual output will be coupled with 3D virtual reality information, and heads-up projection displays should allow superposition of information onto the user's environment.
There is no Moore's Law for humans. Human evolution is a slow process, and society-wide human adaptation takes substantial time; the size of and spacing between fingers, for example, have been essentially the same for approximately a millennium. Furthermore, humans have a finite, non-increasing capacity that limits the number of concurrent activities they can perform, and their effectiveness drops as they try to multiplex more activities. Frequent interruptions require a refocusing of attention, and after each refocus a period of time is required to reestablish the context that existed prior to the interruption. In addition, human short-term memory can hold only seven plus or minus two (that is, five to nine) chunks of information. Against this limited capacity, today's systems can overwhelm users with data, leading to information overload. The challenge for human-computer interaction design is to use advances in technology to preserve human attention and avoid information saturation.
The objective of wearable computer design is to merge the user's information space with his or her work space. The wearable computer should offer seamless integration of information processing tools with the existing work environment. To accomplish this, the wearable system must offer functionality in a natural and unobtrusive manner, allowing the user to dedicate all of his or her attention to the task at hand without distraction from the system itself. Conventional methods of interaction, including keyboard, mouse, joystick, and monitor, all require some fixed physical relationship between user and device, which can considerably reduce the efficiency of a wearable system. The most recent research on wearable computing can be found at the International Symposium on Wearable Computers Web site: iswc.gatech.edu.
Mobile applications can be categorized using a three-tiered taxonomy based upon the time rate of change of their data.
When combined with ubiquitous computing, wearable computers will provide access to the right information at the right place and at the right time. Distractions are even more of a problem in mobile environments than in desktop environments, since the user is often preoccupied with walking, driving, or other essential real-world interactions. A ubiquitous computing environment that minimizes distraction must be context-aware [1]. Context-aware computing describes a situation in which a mobile computer is aware of its user's state and surroundings and modifies its behavior based on this information. A user's context can be quite rich, consisting of attributes such as physical location, physiological state (such as body temperature and heart rate), emotional state (such as angry, distraught, or calm), personal history, daily behavioral patterns, and so on. A human assistant given such context would make decisions proactively, anticipating user needs, and would typically not disturb the user at inopportune moments except in an emergency. The goal is to enable mobile computers to play an analogous role, exploiting context information to significantly reduce demands on human attention. Combined with inferences about user intentions, context-aware computing would also allow improvements in user-perceived network and application performance and reliability.
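To make the idea concrete, here is a minimal sketch of such an interruption policy. It is an illustration only, not a design from this article: the context attributes, thresholds, and the should_interrupt function are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    """A small slice of the rich context described above (fields are hypothetical)."""
    location: str       # e.g., "office", "car"
    activity: str       # e.g., "driving", "walking", "idle"
    heart_rate: int     # physiological state, in beats per minute
    in_meeting: bool    # derived from the user's calendar

def should_interrupt(ctx: UserContext, urgency: str) -> bool:
    """Decide whether delivering a message now would demand too much attention.

    Emergencies always get through; otherwise the system stays quiet while
    the user is preoccupied with essential real-world interactions.
    """
    if urgency == "emergency":
        return True
    if ctx.activity == "driving" or ctx.in_meeting:
        return False
    if ctx.heart_rate > 120:  # crude, illustrative proxy for physical or emotional stress
        return False
    return ctx.activity == "idle"

# A routine reminder is held back while the user drives; an emergency is not.
ctx = UserContext(location="car", activity="driving", heart_rate=85, in_meeting=False)
assert not should_interrupt(ctx, "routine")
assert should_interrupt(ctx, "emergency")
```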
Context-aware applications are built upon fundamental services such as spatial and temporal awareness. Spatial awareness includes the relative and absolute position and orientation of a user. Temporal awareness includes the scheduled time of public and private events.
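Such services might be surfaced to applications through simple data structures along the following lines. This is a hypothetical sketch; none of the type or field names come from the article.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

@dataclass
class SpatialContext:
    """Where the user is and which way he or she is facing (illustrative fields)."""
    latitude: float             # absolute position
    longitude: float
    heading_deg: float          # orientation, degrees clockwise from north
    room: Optional[str] = None  # relative position, e.g. an office number, when indoors

@dataclass
class TemporalContext:
    """When things happen: the current time and upcoming scheduled events."""
    now: datetime
    next_public_event: Tuple[datetime, str]   # e.g., a department seminar
    next_private_event: Tuple[datetime, str]  # e.g., a calendar appointment

def event_imminent(t: TemporalContext, threshold_minutes: int = 15) -> bool:
    """True if the user's next private event starts within the threshold."""
    start, _ = t.next_private_event
    return (start - t.now).total_seconds() <= threshold_minutes * 60
```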
Consider the following example. Busy individuals often do not have time to browse their calendars, check for new email, or read bulletin boards. Context-aware agents can deliver relevant information to the user when it is needed. Appointments, urgent email, and interesting events on a public calendar are shown to the user when the user is not engaged in more important tasks. These proactive agents deliver information to the user instead of the user polling the relevant sources.
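A minimal sketch of this push-style delivery follows. The ProactiveAgent class, its information sources, and the is_user_engaged test are all hypothetical; the point is only that the agent gathers and delivers items itself rather than waiting to be polled.

```python
import heapq

class ProactiveAgent:
    """Pushes items from information sources to the user when he or she is receptive."""

    def __init__(self, sources, is_user_engaged):
        self.sources = sources                  # callables returning (priority, message) pairs
        self.is_user_engaged = is_user_engaged  # callable: True while the user is busy
        self.queue = []                         # min-heap, lowest priority number first

    def gather(self):
        """Collect new appointments, urgent email, public-calendar events, and so on."""
        for source in self.sources:
            for item in source():
                heapq.heappush(self.queue, item)

    def step(self):
        """Deliver queued items only when the user is not engaged in a more important task."""
        self.gather()
        while self.queue and not self.is_user_engaged():
            _, message = heapq.heappop(self.queue)
            print(f"[agent] {message}")

# Hypothetical sources: a calendar and a mail filter.
calendar = lambda: [(1, "Design review starts in 15 minutes")]
urgent_mail = lambda: [(0, "Urgent email from the project sponsor")]
agent = ProactiveAgent([calendar, urgent_mail], is_user_engaged=lambda: False)
agent.step()  # delivers the email first, then the appointment
```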
Three examples of first-generation context-aware agents of this kind have already been built.
Society has historically evolved its tools and products into more portable, mobile, and wearable form factors. Wearable implies the use of the human body as a support environment for the object. Clocks, radios, and telephones are examples of this trend, and computers are undergoing a similar evolution. Simply shrinking computing tools from the desktop paradigm to a more portable scale does not take advantage of a whole new context of use. While it is possible to miniaturize keyboards, human evolution has not kept pace by shrinking our fingers. There are minimal sizes beyond which objects become difficult to manipulate; the human anatomy introduces minimal and maximal dimensions that define the shape of wearable objects. The mobile context also defines dynamic interactions: attempting to position a pointer on an icon while moving can be tedious and frustrating.
Wearability is defined as the interaction between the human body and the wearable object; dynamic wearability extends this to the human body in motion. Design for wearability considers the physical shape of objects and their active relationship with the human form. The study in [2] explored history and culture, including clothing, costumes, protective wearables, and carried devices, as well as physiology, biomechanics, and the movements of modern dancers and athletes. Its authors drew upon their experience with more than two dozen generations of wearable computers, representing over 100 person-years of research, and codified the results into guidelines for designing wearable systems, summarized in Table 2. By considering how a product's designer has responded to these guidelines, a buyer can make a more informed purchase.
The physiological effects of long-term wearable computer use on the human body are as yet unknown. As wearable systems become increasingly useful and are worn for longer periods, it will be important to test their effects on the wearer's body.
Wearable computers are an attractive way to deliver a ubiquitous computing system's interface to a user, especially in non-office-building environments. The biggest challenges in merging ubiquitous and wearable computing lie in fitting the computer to the human in terms of interface, cognitive model, contextual awareness, and adaptation to the tasks being performed.
User Interface Models. What is the appropriate set of metaphors for providing mobile access to information (what, for example, is the next "desktop" or "spreadsheet")? Such metaphors typically take more than a decade to develop: the desktop metaphor began in the early 1970s at Xerox PARC, yet more than a decade passed before it was widely available to consumers. Extensive experimentation with end-user applications will be required, and there may ultimately be a set of metaphors, each tailored to a specific application or information type.
Input/Output Modalities. While several modalities mimicking the input/output capabilities of the human brain have been the subject of computer science research for decades, their accuracy and ease of use are not yet acceptable: many current modalities require extensive training periods, and inaccuracies frustrate users. In addition, most of these modalities demand extensive computing resources that will not be available in low-weight, low-energy wearable computers. There is room for new, easy-to-use input devices such as the dial developed at Carnegie Mellon University for list-oriented applications.
Quick Interface Evaluation Methodology. Current approaches to evaluating a human-computer interface require elaborate procedures and scores of subjects; such an evaluation may take months and is not appropriate for use during interface design. Quicker evaluation techniques are needed, focusing especially on decreasing human error and frustration.
Matched Capability with Applications. The current common belief is that technology should provide the highest performance capability. However, this capability is often unnecessary to complete an application and enhancements such as full-color graphics require substantial resources and may actually decrease ease of use by causing information overload for the user. Interface design and evaluation should focus on the most effective means for information access and resist the temptation to provide extra capabilities simply because they are available.
Context-Aware Applications. How do we develop social and cognitive models of applications? How do we integrate input from multiple sensors and map it onto the user's social and cognitive state? How do we anticipate user needs? How do we interact with the user? These and many other questions must be addressed before context-aware computing becomes possible.
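As one small illustration of the sensor-integration question, the toy function below maps a handful of sensor readings onto a coarse user state. The sensors, thresholds, and state labels are invented for illustration; real social and cognitive models would be far richer.

```python
def fuse_sensors(readings: dict) -> str:
    """Map raw sensor readings onto a coarse user state (purely illustrative rules)."""
    if readings["speed_m_s"] > 3.0:
        return "traveling"   # likely in a vehicle; avoid visual output
    if readings["typing"]:
        return "focused"     # engaged at a device; defer low-priority items
    if readings["heart_rate"] > 110 and readings["ambient_noise_db"] > 80:
        return "stressed"    # hold everything except emergencies
    return "available"

print(fuse_sensors({"speed_m_s": 0.0, "heart_rate": 72,
                    "ambient_noise_db": 45, "typing": False}))  # -> available
```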
1. Dey, A.K., Salber, D., and Abowd, G.D. Context-based infrastructure for smart environments. In Proceedings of the 1st International Workshop on Managing Interactions in Smart Environments (MANSE '99), Springer-Verlag, New York, 1999, 114–129.
2. Gemperle, F., Kasabach, C., Stivoric, J., Bauer, M., and Martin, R. Design for wearability. In Proceedings of the 2nd International Symposium on Wearable Computers, IEEE Computer Society Press, 1998, 116–122.