The DARPA challenges for autonomous driving have stimulated impressive technology demonstrations, showcased recent fundamental advances in artificial intelligence, and highlighted important directions for future research.
The following paper by Sebastian Thrun gives us a glimpse into the design and implementation of two winning DARPA Grand Challenge entries. Those robotic systems were enabled by advances that have taken place in AI over the last 25 years in representation, learning, and control.
It is as important for robots as for humans to know what they don't know. Explicit representation of uncertainty in terms of probability, and incremental incorporation of information using Bayesian techniques, enable construction of systems that are robust and effective. Probabilistic estimation methods have long been used in relatively constrained applications, such as target tracking. Their extension to highly complex spaces, such as maps of surrounding obstacles and the intentions of other vehicles, is crucial to the success of intelligent systems embedded in the physical world. Such systems incorporate information from a variety of sensors; indeed, successful fusion of sensor information from inertial navigation sources, vision, and range sensing was a crucial aspect of the robotic cars highlighted here. In the future, such systems will estimate abstract quantities, such as the goals and satisfaction level of their human drivers, and integrate information ranging from engine noise to weather reports to the content of human passengers' conversation into a comprehensive picture of the world around them.
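The incremental incorporation of information described above can be sketched with a minimal example (not from the paper; the hit/miss likelihoods are assumed values): a belief about a single binary state, such as "this map cell contains an obstacle," is updated with Bayes' rule after each noisy sensor reading.

```python
# Illustrative sketch: fusing a stream of noisy binary sensor readings
# about one hypothesis ("cell is an obstacle") via repeated Bayes updates.
# The likelihoods 0.9/0.2 and 0.1/0.8 are invented for illustration.

def bayes_update(prior, like_if_obstacle, like_if_free):
    """Posterior P(obstacle | reading) from a prior and the reading's
    likelihood under each hypothesis."""
    num = like_if_obstacle * prior
    return num / (num + like_if_free * (1.0 - prior))

belief = 0.5                      # start maximally uncertain
for reading in ["hit", "hit", "miss", "hit"]:
    if reading == "hit":          # sensor reports an obstacle
        belief = bayes_update(belief, 0.9, 0.2)
    else:                         # sensor reports free space
        belief = bayes_update(belief, 0.1, 0.8)
# After three hits and one miss, belief in "obstacle" is high (~0.92),
# yet the single contradictory reading visibly reduced certainty.
```

The same recursive structure, generalized to continuous state spaces, underlies the Kalman and particle filters used throughout probabilistic robotics.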
Estimation techniques are ultimately built on models that tell us how perceptual inputs are related to the underlying state of the world and how future states of the world are related to previous ones. Traditionally, such models have been built by hand: expert-system models were specified with logical rules representing intuitive knowledge, and estimation systems were specified with linear-Gaussian distributions representing physical laws and the results of empirical tests. Such hand-built models are either too brittle or too representationally limited to support intelligent robotics.
Machine learning has provided a methodology for acquiring models from data: these models can be highly nonlinear, are grounded in real-world data, and can be updated incrementally as the surrounding world changes. This article provides us with a wonderful example of learning: it is difficult to build a visual drivability detector that works across all lighting conditions, terrain types, and so on. However, in any given conditions, it is apparently not too difficult to find a simple model that discriminates between drivable and non-drivable terrain. Thus, the Stanley system uses an online self-supervised learning system to continuously adapt a model that predicts drivability from visual input. The self-supervised aspect is particularly interesting. The vehicle uses short-range laser sensing to label the terrain near it as drivable; this data is used to learn a model that then allows visual prediction of drivability over much longer distances. Today, virtually any system operating on data from the real world—speech, vision, language—uses models learned from data to make robust estimates and predictions.
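The self-supervised loop can be sketched as follows. This is a deliberately simplified stand-in for Stanley's approach (which modeled terrain color with adaptive Gaussian mixtures): here a single invented scalar feature ("brightness") of laser-labeled near-range patches feeds a running mean-and-variance model, which then classifies distant patches by their distance from that model.

```python
# Hedged sketch of online self-supervised drivability learning.
# Near-range patches labeled drivable by the laser update a running
# statistical model; far-away patches are classified against it.
# The feature and the k-sigma threshold are assumptions for illustration.

class DrivabilityModel:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def learn(self, feature):
        """Incorporate one laser-labeled drivable sample (Welford's
        online algorithm, so no samples need be stored)."""
        self.n += 1
        delta = feature - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (feature - self.mean)

    def is_drivable(self, feature, k=3.0):
        """Classify a distant patch: drivable if its feature lies
        within k standard deviations of the learned model."""
        var = self.m2 / (self.n - 1) if self.n > 1 else 1.0
        return abs(feature - self.mean) <= k * max(var, 1e-6) ** 0.5

model = DrivabilityModel()
for brightness in [0.48, 0.52, 0.50, 0.49, 0.51]:  # laser says "road"
    model.learn(brightness)
```

Because the model is updated continuously, it tracks changing lighting and terrain rather than relying on a fixed, hand-tuned appearance model.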
In the end, what matters is how the system behaves. The vehicle must behave robustly, selecting appropriate actions by taking into consideration the state of the environment, from road friction to pedestrians to the driver's desires, as well as its own uncertainty about the state. This article describes planning and control systems, operating at different levels of abstraction, each of which attempts to select actions that maximize expected utility: that is, that will perform well, on average, given the robot's uncertainty. In addition, actions may be chosen for the explicit purpose of reducing uncertainty, by aiming cameras in appropriate directions or driving to previously unexplored locations. Robustness also depends on time-sensitive decision-making: when an obstacle suddenly appears, the vehicle must stop immediately, without time to weigh all of its options in detail. The vehicles described here have multitiered control systems that ensure quick reaction to changing environmental conditions while allowing deeper deliberation for higher-level decisions.
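The principle of selecting actions that maximize expected utility can be made concrete with a minimal sketch; the states, actions, probabilities, and utility values below are invented for illustration and are not drawn from the paper.

```python
# Minimal sketch of expected-utility action selection under uncertainty.
# All numbers are assumed: a belief over two world states and a utility
# table over two candidate actions.

belief = {"clear": 0.7, "pedestrian": 0.3}    # P(state) from perception

utility = {                                    # U(action, state)
    ("proceed", "clear"): 10, ("proceed", "pedestrian"): -1000,
    ("brake",   "clear"): -1, ("brake",   "pedestrian"):    5,
}

def expected_utility(action):
    """Average utility of an action, weighted by belief in each state."""
    return sum(p * utility[(action, s)] for s, p in belief.items())

best = max(["proceed", "brake"], key=expected_utility)
# Even with only 30% belief in a pedestrian, the catastrophic utility of
# hitting one makes "brake" the expected-utility-maximizing choice.
```

The same computation, applied over sequences of actions and far richer state spaces, is what the multitiered planning and control layers approximate at their different levels of abstraction.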
Thrun describes systems that exhibit important steps toward the goal of truly integrated, robust, intelligent systems that interact with the physical world. They will motivate researchers, old and new, to push forward to generally intelligent and robust robots that manage ubiquitous uncertainty, human interaction, and complex, competing objectives to substantially improve the safety, enjoyability, and efficiency of transportation. These same fundamental techniques will, in addition, provide the basis for revolutionary advances in intelligent robots for homes, hospitals, and factories.
©2010 ACM 0001-0782/10/0400 $10.00