Robots today are moving toward applications beyond the structured environments of manufacturing plants, making their way into the everyday world of social, entertainment, and industrial environments inhabited by people. Here, we discuss the models, strategies, and algorithms associated with the basic capabilities needed for robots to work and cooperate with humans. In addition to the capabilities they bring to the physical robot, these models and algorithms, and more generally the body of developments in the overall field of robotics, represent a significant contribution to the virtual world. For example, haptic interaction with an accurate dynamic simulation provides special insight into the real-world behaviors of physical systems. The potential applications of this emerging technology include virtual prototyping, animation, surgery, robotics, cooperative design, and education.
The successful introduction of robotics into human environments will depend on the development of competent and practical systems that are dependable, safe, and easy to use. To work, cooperate, assist, and interact with humans, this new generation of robots needs mechanical structures that accommodate interaction with humans while fitting into our unstructured, human-scale, and unpredictable environments.
Human-compatible robotic structures have to integrate mobility (legged or wheeled) and manipulation (preferably bi-manual), while providing the needed access to perception and monitoring (head cameras) [5, 11]. Such diverse requirements can be fulfilled only through rather complex mechanisms, posing challenges for algorithms in modeling, perception, programming, motion planning, and control.
As advances are made in the methodologies and techniques needed to address these challenges, it has become increasingly apparent that their effect goes beyond the physical robot. Models and algorithms in robotics provide the foundations for developing many of the application areas at the intersection of the physical and the virtual worlds. These are areas in which physical models are simulated and interacted with by both human users and robots, including virtual prototyping, haptics, molecular biology, training, games, collaborative work, and haptically augmented teleoperation [4, 9, 10]. A number of ongoing efforts in robotics have yielded significant advancements in these areas; here, we survey some of those we are pursuing in our laboratory.
The emerging applications in robotics share the requirement of simulating and controlling physical models sophisticated enough to recreate a complicated, physically consistent world, at speeds high enough to allow user interaction. An example is the haptic display of virtual environments, whereby a robotic device permits haptic interaction with a virtual environment. Promising applications for this technology include virtual prototyping, teleoperation, training, and games.
For haptic interaction to seem realistic to a user, the virtual object should exhibit the same simulated physical properties as the real object, including the dynamics of rigid and articulated bodies, along with such mutual influences as those created by the impact forces during contact. To resolve the physical constraints arising in these situations and simulate the dynamic behavior of complex objects in a cluttered environment, we have developed fast algorithms, some of which are discussed later in the article.
Both the virtual world and the real world can be populated by complex, articulated, and actuated mechanisms like humanoid robots. Creating motions for these mechanisms, whether to perform a task, imitate a user command specified in a haptically simulated world, or react to interaction with other objects, is a difficult problem. Our approach to whole-robot modeling and control applies equally to humanoid robots and to complex articulated bodies simulated in the virtual world.
In increasingly complex virtual and physical environments, robots and physical objects can exhibit autonomous behavior; rather than being limited to passive interaction by users, they themselves independently pursue an objective and initiate interactions with other objects or users.
A haptic device is a robot that, rather than manipulating the physical world in accordance with a user command, applies to the user's hand or finger the forces resulting from the user's motion in a virtual environment. Following the user's input, for example, a virtual hand is moved in a virtual environment. As the virtual hand is subjected to forces resulting from contact with objects, these forces are applied through the haptic device to the user's hand or finger, enabling the haptic display of virtual environments.
A haptic device can also serve as an input device (see Figure 1). The user specifies a motion for the object held by the humanoid robot. The sequence of images (top) shows the resulting overall motion if the object is commanded to move from side to side. While the object is manipulated interactively, the figure adjusts its posture autonomously according to simple posture energies.
Haptic displays of virtual worlds can also be linked to the real world. This linkage corresponds to a transition from haptically controlling and simulating a virtual environment to remotely controlling a physical environment. Teleoperation with force feedback can then be augmented with the simulated environment to produce a haptically augmented reality in which the world model is updated through the monitoring and perception of the real world. This haptically augmented reality might, for example, assist a surgeon operating remotely by preventing the accidental penetration of critical tissue.
Haptics poses the special computational challenge of resolving the dynamics and contact forces of the virtual environment in real time. The human sense of touch is extremely sensitive to slight vibrations, so the required computations must be performed fast enough to control the haptic device at rates of 1,000Hz. The collision-detection methods we are developing employ bounding-sphere hierarchies to address complex environments modeled by many thousands of polygons.
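To make the pruning idea concrete, consider the following minimal sketch, written in Python with hypothetical names (SphereNode, collect_candidate_pairs); the hierarchies used in the actual system are considerably more elaborate. A single sphere-sphere distance test can discard whole subtrees of geometry, so the expensive exact polygon tests run only on the few leaf pairs whose bounding spheres actually overlap.

    import numpy as np

    class SphereNode:
        """A node of a bounding-sphere hierarchy: a sphere enclosing
        either child spheres or, at the leaves, a few polygons."""
        def __init__(self, center, radius, children=(), polygons=()):
            self.center = np.asarray(center, dtype=float)
            self.radius = float(radius)
            self.children = list(children)
            self.polygons = list(polygons)

    def spheres_overlap(a, b):
        # Two spheres overlap iff the distance between their centers
        # does not exceed the sum of their radii.
        return np.linalg.norm(a.center - b.center) <= a.radius + b.radius

    def collect_candidate_pairs(a, b, out):
        """Descend both hierarchies, pruning disjoint subtrees; only
        overlapping leaf pairs are passed on to exact polygon tests."""
        if not spheres_overlap(a, b):
            return  # an entire subtree pair is pruned with one test
        if not a.children and not b.children:
            out.append((a.polygons, b.polygons))
        elif a.children and (not b.children or a.radius >= b.radius):
            for child in a.children:  # split the larger sphere first
                collect_candidate_pairs(child, b, out)
        else:
            for child in b.children:
                collect_candidate_pairs(a, child, out)

Because most subtree pairs fail the first test, the number of exact tests tends to grow with the number of near contacts rather than with the total polygon count, which is what makes 1,000Hz rates feasible for environments of many thousands of polygons.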
The fidelity of the haptic sensation and the robustness of the interaction with the environment are both achieved by using the concept of a virtual proxy to represent the user's physical probe or finger. The virtual proxy can be viewed as if it were connected to the user's finger by a stiff spring. The haptic device is used to generate the forces of the virtual spring that appear to the user as the constraint forces caused by contact with a real environment. This framework also allows haptic shading to smooth the edges in polygon data and display surface properties, such as friction and texture.
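A minimal sketch of the proxy computation follows, assuming illustrative stiffness and damping values and an environment simplified to half-space constraints; the actual system resolves the proxy position against the full polygon data [12].

    import numpy as np

    K_SPRING = 800.0  # virtual coupling stiffness (N/m), assumed value
    B_DAMP = 2.0      # damping for a stable coupling, assumed value

    def update_proxy(device_pos, surfaces):
        """Place the proxy as close to the device position as the
        environment allows; `surfaces` is a list of (point, unit normal)
        half-space constraints standing in for real geometry."""
        goal = np.array(device_pos, dtype=float)
        for p, n in surfaces:
            depth = np.dot(goal - p, n)
            if depth < 0.0:        # the goal penetrates this surface
                goal -= depth * n  # project it back onto the surface
        return goal

    def proxy_force(proxy_pos, device_pos, device_vel):
        """Force displayed to the user: the device is tied to the proxy
        by a stiff, damped virtual spring. When the proxy is held back
        on a surface, the spring tension is felt as a contact force."""
        return K_SPRING * (proxy_pos - device_pos) - B_DAMP * device_vel

Each cycle of the 1,000Hz servo loop reads the device position, updates the proxy, and commands proxy_force to the device; haptic shading can substitute interpolated normals for the facet normals used in the projection, while friction and texture perturb the tangential component of the displayed force.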
We developed a general framework [12] for resolving multiple simultaneous contacts between articulated multibody systems in the context of operational space control for robots [7]. Using this framework, the dynamic relationships between all existing contact points can be described. These relationships are characterized by the masses as perceived at the contact points. A force exerted at a contact point, whether from a collision with another object or from interaction with a user, can be translated into forces at all related contact points. The necessary computations can be performed with an efficient recursive algorithm.
The contact space representation allows interaction between groups of dynamic systems to be described easily without having to examine the complex equations of motion of each individual system. A collision model can be developed with the same ease as if one were considering interaction only between simple bodies. Impact and contact forces between interacting bodies can then be solved efficiently.
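For intuition, here is the simplest instance of the idea, a single frictionless contact between two point masses, where the mass perceived at the contact reduces to the familiar effective mass; the recursive algorithm of [12] generalizes this to all coupled contact points of articulated multibody systems.

    import numpy as np

    def contact_impulse(m1, m2, v1, v2, normal, restitution=0.5):
        """Impulse magnitude for one frictionless contact between two
        point masses, along the unit contact normal."""
        v_rel = np.dot(v1 - v2, normal)  # approach speed along normal
        if v_rel >= 0.0:
            return 0.0                   # bodies already separating
        m_eff = 1.0 / (1.0 / m1 + 1.0 / m2)  # mass perceived at contact
        return -(1.0 + restitution) * m_eff * v_rel

    # The impulse j changes the velocities in opposite directions:
    #   v1 += (j / m1) * normal
    #   v2 -= (j / m2) * normal

For articulated bodies, the scalar masses become configuration-dependent operational-space inertias, and an impulse applied at one contact point propagates to all others through the relationships described earlier.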
For the simulation of articulated body dynamics moving in free space, our effort focuses on algorithms for robotic mechanisms with branching structures in order to address the complexities of the applications outlined earlier. For example, building on previous work in this area, we developed a recursive algorithm for computing the operational space dynamics and control of an n-joint branching, redundant, articulated robotic mechanism with m operational points [2, 3]. An operational point is a point on the robot at which a certain behavior is controlled; in most cases it represents an end effector. The computational complexity of this algorithm is O(nm + m³); existing symbolic methods require a computational effort of O(n³ + m³). Since m can be considered a small constant in practice, the algorithm runs in linear time O(n) as the number of links increases.
We integrated this framework with our haptic rendering system [12] to provide a general environment for interactive haptic dynamic simulation. Figure 2 illustrates a virtual environment modeled with this system in which two humanoid figures are colliding and interacting with numerous objects in the environment.
For robots with human-like structures, tasks are not limited to the specification of the position and orientation of a single effector, or operational point. For these robots, task descriptions may involve combinations of coordinates associated with one or both arms, the head-camera, and/or the torso. The remaining freedom of motion is assigned to various criteria related to the robot's posture and its internal and environmental constraints in the form of posture energies.
Robot dynamics are conventionally described in terms of the robot joint motion. The operational space formulation [7] provides an effective framework for dynamic modeling and control of branching mechanisms in terms of their operational points. As a consequence, the desired behavior at an operational point can be described directly in terms of its motion, rather than in terms of the joints causing its motion. That is, the operational space framework allows the direct control of the task, implicitly accounting for the dynamics and kinematics associated with the manipulator.
A generalized torque/force relationship [7, 8] provides a decomposition of the total joint torque command acting on the robot into two dynamically decoupled command torque vectors: the torque corresponding to the task behavior and the torque affecting only posture behavior. The former is used to command motions at the operational points of the robot; the latter performs motion using the robot's redundant degrees of freedom without affecting task behavior. This framework extends easily to robots with branching structures of m effectors.
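In symbols, the decomposition is Γ = JᵀF + NᵀΓposture, where J is the task Jacobian, F the operational-space command force, and Nᵀ the dynamically consistent null-space projector. A minimal numerical sketch of this decomposition, assuming a configuration away from singularities so that the task-space inertia is well defined:

    import numpy as np

    def decomposed_torque(A, J, F_task, gamma_posture):
        """Dynamically decoupled torque decomposition [7, 8]:
           Gamma = J^T F_task + N^T gamma_posture,
        with N^T = I - J^T Jbar^T and Jbar = A^-1 J^T Lambda, the
        dynamically consistent generalized inverse of J.
        A: n x n joint-space inertia; J: m x n task Jacobian."""
        A_inv = np.linalg.inv(A)
        Lambda = np.linalg.inv(J @ A_inv @ J.T)  # task-space inertia
        Jbar = A_inv @ J.T @ Lambda              # dyn. consistent inverse
        N_T = np.eye(A.shape[0]) - J.T @ Jbar.T  # null-space projector
        return J.T @ F_task + N_T @ gamma_posture

Because Jbar is the dynamically consistent generalized inverse, posture torques filtered through Nᵀ produce no acceleration at the operational points; this is precisely the decoupling property discussed next.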
Dynamic consistency is the essential property for the task behavior to maintain its responsiveness and to be dynamically decoupled from the posture behavior. Dynamic consistency enables task behavior and posture behavior to be specified independently of each other, providing intuitive control of complex systems. It is possible, for example, to specify a task at the end-effector of a robot while imposing posture constraints that maintain the robot's overall balance. The resulting behavior is task execution at the operational points, while redundant degrees of freedom are used to maintain balance constraints to prevent tipping. More complex posture behaviors can be obtained through a combination of simpler behaviors. We are currently exploring the generation of human-like natural motion from the motion capture of humans and the extraction of motion characteristics using human biomechanical models.
In addition to control methods to generate task and posture behavior for a single robot, effective cooperation strategies are needed for both the cooperation of multiple robots and the interaction between robots and humans. Several cooperative robots may, for instance, support a load while being guided by a human user to an attachment or visually following the guide to a destination. Our approach to these problems is based on the integration of virtual linkage [9] and the augmented object model. Virtual linkage characterizes internal forces; the augmented object describes the system's closed-chain dynamics. This approach has been implemented successfully on the Stanford robotic platforms for cooperative manipulation and human-guided motions.
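The flavor of the force-distribution problem can be conveyed with a simplified sketch: k frictionless point contacts grasp an object, the pseudoinverse of the grasp map realizes the desired object wrench, and the null space of the grasp map carries the internal (squeeze) forces that the virtual linkage model characterizes. All names here are illustrative, and the actual framework [8, 9] handles the full manipulator dynamics rather than a static force balance.

    import numpy as np

    def grasp_map(contact_points, com):
        """6 x 3k grasp map for k point contacts exerting 3D forces:
        each contact contributes its force and the moment r x f about
        the object's center of mass."""
        cols = []
        for p in contact_points:
            r = np.asarray(p, dtype=float) - np.asarray(com, dtype=float)
            r_cross = np.array([[0.0, -r[2], r[1]],
                                [r[2], 0.0, -r[0]],
                                [-r[1], r[0], 0.0]])
            cols.append(np.vstack([np.eye(3), r_cross]))
        return np.hstack(cols)

    def distribute_forces(G, wrench, internal=None):
        """Contact forces realizing a desired object wrench; adding a
        null-space component changes only the internal forces."""
        G_pinv = np.linalg.pinv(G)
        f = G_pinv @ wrench
        if internal is not None:
            f += (np.eye(G.shape[1]) - G_pinv @ G) @ internal
        return f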
A robotic system must be capable of a sufficient level of competence to avoid obstacles during motion. Even when a path is provided by a human or automatic planner, sensor uncertainties and unexpected obstacles can make the motion impossible to complete. Our research on the artificial potential field method [6] addresses this problem at the control level to provide efficient real-time collision avoidance. However, due to their local nature, reactive methods are limited in their ability to deal with complex environments. Our investigation of a framework to integrate real-time collision avoidance with a global collision-free path has resulted in the elastic strip approach [1], which combines the benefits of global planning and reactive systems in the execution of motion tasks.
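Before turning to the elastic strip framework, the basic mechanism of the potential field method can be sketched in a few lines; the gains and influence distance below are placeholder values, and positions are numpy arrays.

    import numpy as np

    def attractive_force(x, goal, k_att=1.0):
        # Gradient descent on a quadratic potential well at the goal.
        return -k_att * (x - goal)

    def repulsive_force(x, obstacle, rho0=1.0, eta=1.0):
        """Repulsion active only inside the influence distance rho0;
        the underlying potential grows without bound at the obstacle
        boundary, so approach is increasingly strongly resisted [6]."""
        diff = x - obstacle
        rho = np.linalg.norm(diff)
        if rho >= rho0 or rho == 0.0:
            return np.zeros_like(x)
        return eta * (1.0 / rho - 1.0 / rho0) * (diff / rho**3)

    def control_force(x, goal, obstacles):
        # Net real-time command: attraction plus all repulsions.
        f = attractive_force(x, goal)
        for obs in obstacles:
            f = f + repulsive_force(x, obs)
        return f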
The elastic strip framework modifies a specified path in real time to accommodate potential interaction with other robots and objects in the environment. The resulting modified path enables goal-directed motion in environments that change unpredictably. Because the entire path is modified, the framework avoids the problem of local minima exhibited by purely local methods. Efficiency is critical to satisfying the real-time requirements of the targeted applications; real-time performance is achieved through efficient free-space computation and representation techniques.
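A simplified sketch of one such real-time update follows; it is closer in spirit to an elastic band than to the full elastic strip framework of [1], which represents free space explicitly and accounts for the whole robot body rather than a point.

    import numpy as np

    def modify_path(path, obstacles, k_int=1.0, k_ext=1.0,
                    d0=1.0, step=0.05):
        """One update of an elastic path: interior waypoints feel an
        internal contraction force from their neighbors (keeping the
        path short and smooth) and an external repulsion from nearby
        obstacles (pushing the path into free space)."""
        new_path = [path[0]]               # endpoints remain fixed
        for i in range(1, len(path) - 1):
            p = path[i]
            # internal force: pull toward the neighbors' midpoint
            f = k_int * (0.5 * (path[i - 1] + path[i + 1]) - p)
            # external force: repulsion inside influence distance d0
            for obs in obstacles:
                diff = p - obs
                d = np.linalg.norm(diff)
                if 0.0 < d < d0:
                    f = f + k_ext * (d0 - d) * (diff / d)
            new_path.append(p + step * f)
        new_path.append(path[-1])
        return new_path

Running such an update every control cycle lets the path deform smoothly as obstacles move, while the fixed endpoints preserve the global, goal-directed character of the originally planned path.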
The elastic strip framework exploits the decomposition of a robot's motion into task and posture to enable task-consistent real-time path modification, allowing the path to be modified without interrupting task execution. The robot's overall behavior can consist of a combination of simple behaviors, such as maintaining a desired posture and avoiding collisions with obstacles. When the robot's physical limitations make the simultaneous execution of a task and additional behaviors inconsistent, task execution can be suspended automatically, then resumed when task consistency is achieved again.
Figure 3 shows an example of a real-time path modification in interaction with the environment, whereby a skiing humanoid avoids a moving snowman and crouches under the finish banner. The humanoid avoids the snowman by performing a detour, but also by moving the ski poles closer to its body. The skier's crouching behavior as it passes under the banner is the result of posture control. The ability to combine task execution with obstacle avoidance and posture behavior, together with the ability to suspend and resume tasks, provides an important foundation for complex mechanical systems to operate autonomously in both the virtual and the real worlds.
Advances toward robotic applications in human environments depend on developing the basic capabilities needed for both autonomous operations and human-robot interaction. We have presented methodologies for whole-robot coordination and control, cooperation among multiple robots, interactive haptic simulation with contact, dynamic simulation, and real-time modification of collision-free paths to accommodate changes in the environment. These developments provide some of the basic foundations in the effort to create robots for human environments. The real-time capability shared by these developments is a key characteristic in providing the computational tools needed for applications at the intersection of the physical and the virtual worlds.
1. Brock, O. and Khatib, O. Elastic strips: Real-time path modification for mobile manipulation. In Proceedings of the International Symposium on Robotics Research (1997), 117–122.
2. Chang, K.-S. and Khatib, O. Operational space dynamics: Efficient algorithms for modeling and control of branching mechanisms. In Proceedings of the International Conference on Robotics and Automation (2000), 850–856.
3. Featherstone, R. Robot Dynamics Algorithms. Kluwer Academic Publishers, 1987.
4. Finn, P. and Kavraki, L. Computational approaches to drug design. Algorithmica 25 (1999), 347–371.
5. Hirai, K., Hirose, M., Haikawa, Y., and Takenaka, T. The development of the Honda humanoid robot. In Proceedings of the International Conference on Robotics and Automation (1998), 1321–1326.
6. Khatib, O. Real-time obstacle avoidance for manipulators and mobile robots. Int. J. Robot. Res. 5, 1 (1986), 90–98.
7. Khatib, O. A unified approach to motion and force control of robot manipulators: The operational space formulation. IEEE J. Robot. Automat. 3, 1 (1987), 43–53.
8. Khatib, O. Inertial properties in robotic manipulation: An object-level framework. Int. J. Robot. Res. 14, 1 (1995), 19–36.
9. Khatib, O., Yokoi, K., Chang, K.-S., Ruspini, D., Holmberg, R., and Casal, A. Coordination and decentralized cooperation of multiple mobile manipulators. J. Robot. Syst. 13, 11 (1996), 755–764.
10. Latombe, J.-C. Motion planning: A journey of robots, molecules, digital actors, and other artifacts. Int. J. Robot. Res. 18, 11 (1999), 1119–1128.
11. Nishiwaki, K., Sugihara, T., Kagami, S., Kanehiro, F., Inaba, M., and Inoue, H. Design and development of research platform for perception-action integration in humanoid robot: H6. In Proceedings of the International Conference on Intelligent Robots and Systems (2000), 1559–1564.
12. Ruspini, D. and Khatib, O. Collision/contact models for dynamic simulation and haptic interaction. In Proceedings of the International Symposium on Robotics Research (1999), 185–195.
Figure. Acrobatic robot ski jumper. The Simulation & Active Interfaces software environment combines rigid object simulation, 3D graphics rendering, and robotic controllers to produce an environment for simulating and interacting in real time with complex robots and sophisticated environments (developed by the Stanford Robotics Laboratory).
Figure 1. A user commands a virtual robot to move the manipulated object using a haptic device. The device permits the user to interact and control the task, while the humanoid robot autonomously adjusts its posture according to simple posture energies.
Figure 2. The sequence shows the complicated dynamic interaction between two humanoid figures and a number of objects. The many contacts between objects and their resulting motions are computed interactively in accordance with their dynamic properties.
Figure 3. The trajectory of the humanoid robot is modified in real time as the snowman moves into its path and the finish banner is lowered. Note how the robot's ski poles are moved closer to its body to help avoid the snowman and how it maintains a natural posture while passing under the banner.