
Communications of the ACM

ACM Careers

Researcher to Investigate Overtrust of Autonomous Vehicles


Autonomous vehicle overtrust, illustration. Credit: Getty Images

Overtrust frequently occurs with autonomous vehicles and robots, and it can have serious, even fatal, consequences for humans in both the military and society at large. With funding from the U.S. Air Force Office of Scientific Research (AFOSR), Alan Wagner, assistant professor of aerospace engineering at Penn State, is investigating the factors that cause overtrust and developing techniques that will allow autonomous systems to recognize and prevent it.

Wagner, who is also affiliated with the Penn State Rock Ethics Institute, is the principal investigator on the three-year, $762,000 project titled "Developing Human Machine Systems That Actively Calibrate a User's Trust." Ayanna Howard, professor and Linda J. and Mark C. Smith Endowed Chair in Bioengineering in the School of Electrical and Computer Engineering at the Georgia Institute of Technology, is the co-PI.

"Many aspects of trust in autonomous systems are not well understood, and overtrust in them poses serious and numerous risks," says Wagner. "The tendency of humans to put too much faith in these systems is an important and significant problem that will directly impact the adoption and uses of autonomous systems, both by the military and civilians, in the near future."

Trust plays a critical role in many aspects of Air Force operations and U.S. Department of Defense missions. From the analysis of intelligence to the control of unmanned aerial vehicles, Air Force operators must be able to trust the systems with which they work, and those systems must strive to retain the operator's trust without fostering overtrust.

To better understand overtrust, Wagner will employ theoretical and experimental research methods in both virtual and live environments. The theoretical work will contribute important conceptual insight into the social phenomenon of trust itself, giving the Air Force the underpinnings needed to examine many different scenarios and situations from the perspective of trust in future operations and missions.

The experimental work will inform the development of robots and automated systems with greater social awareness. This increased awareness will allow a machine to consider the impact of its actions on its relationship with a human, on the operator's trust in the system, and on other mission objectives before performing an action.
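
As a rough illustration of that kind of trade-off, the following minimal Python sketch scores candidate actions by mission value while penalizing actions expected to push the operator's trust away from a well-calibrated level. It is an invented example, not the project's actual model; all action names, values, and weights are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        mission_value: float  # task benefit of taking the action (invented scale)
        trust_delta: float    # assumed effect on the operator's trust estimate

    TARGET_TRUST = 0.7  # hypothetical "well-calibrated" trust level
    WEIGHT = 2.0        # hypothetical penalty for miscalibrating trust

    def score(action: Action, current_trust: float) -> float:
        """Mission value minus a penalty for pushing trust away from the target."""
        new_trust = min(1.0, max(0.0, current_trust + action.trust_delta))
        return action.mission_value - WEIGHT * abs(new_trust - TARGET_TRUST)

    actions = [
        Action("proceed autonomously", mission_value=1.0, trust_delta=+0.15),
        Action("ask operator to confirm", mission_value=0.7, trust_delta=-0.05),
    ]
    best = max(actions, key=lambda a: score(a, current_trust=0.8))
    print(best.name)  # trust is already high, so inviting oversight scores better

Here, because the operator's trust is already above the target level, the action that keeps the operator engaged beats proceeding autonomously, even though it has lower immediate mission value.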

"Autonomous systems are being considered to automatically land military helicopters," says Wagner. "Such systems may help during landings in deserts because of the sand that kicks up and impairs pilot visibility. But we must be vigilant to prevent overtrust of such systems which could lead to situations in which pilots fail to monitor the landing process or eventually lose the skill to land altogether."

It is not currently clear whether an operator's tendency to overtrust is related to individual factors such as age, experience, personality, and mood, or to environmental factors such as location and situational risk. The research will therefore investigate potential precursors of overtrust in an attempt to identify the individuals and situations most likely to put people at risk of trusting an autonomous vehicle or robot too much.
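
One purely hypothetical way such precursors could be combined is a logistic risk score; the feature names, weights, and bias below are invented for illustration and are not drawn from the project.

    import math

    def overtrust_risk(features: dict, weights: dict, bias: float = -1.0) -> float:
        """Logistic score in [0, 1]; higher means greater predicted overtrust risk."""
        z = bias + sum(weights[k] * features[k] for k in weights)
        return 1.0 / (1.0 + math.exp(-z))

    # Invented weights: e.g., failure-free hours with a system raise risk, while
    # having witnessed failures and a risky situation lower it.
    weights = {"hours_with_system": 0.02,
               "prior_failures_seen": -0.50,
               "situational_risk": -0.30}
    features = {"hours_with_system": 40.0,
                "prior_failures_seen": 1.0,
                "situational_risk": 0.5}
    print(round(overtrust_risk(features, weights), 2))  # ~0.3 for this profile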

Findings from the research could have an impact on military operations such as autonomous vehicles refueling fighter jets in flight and quadcopters delivering large, heavy payloads into hard-to-reach territories.

The results of the research could also affect the development and adoption of commercial aircraft that are fully autonomous or rely on autonomous systems for operation.

"Individuals often believe that a machine is smarter than they are — that it's unlikely to fail, will never fail, or that the people who created the machine would never have allowed it to fail — and that is very risky," says Wagner. "Most people trust the GPS systems in their cars, but how would they feel if they were riding in an autonomous air taxi or in a commercial aircraft that was being refueled in flight by an autonomous refueling vehicle?"

Wagner's research award follows his Air Force Young Investigator Award, which supported research into how trust in autonomous systems develops and how damaged trust can be repaired, including a method to computationally represent and reason about trust. For that work, Wagner created robots that apologized when they made a mistake or promised to do better. The findings demonstrated that a robot's apologies and promises repaired trust, but only when they occurred at the right time. This work is already helping to inform and shape the field of human-machine interaction in the aerospace industry and in other domains developing autonomous systems.
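
A toy model makes the timing effect concrete. This is an invented illustration, not Wagner's computational formulation: a failure lowers trust, and an apology restores part of the loss only if it arrives within a short window; all magnitudes are hypothetical.

    def update_trust(trust: float, failed: bool, apologized: bool,
                     steps_since_failure: int, window: int = 2) -> float:
        """One step of a toy trust dynamic with time-limited apology repair."""
        if failed:
            trust -= 0.3  # a failure damages trust (invented magnitude)
        if apologized and steps_since_failure <= window:
            trust += 0.2  # a timely apology repairs some, not all, of the loss
        return min(1.0, max(0.0, trust))

    trust = 0.9
    trust = update_trust(trust, failed=True, apologized=False, steps_since_failure=0)
    print(round(trust, 2))  # 0.6 after the failure
    trust = update_trust(trust, failed=False, apologized=True, steps_since_failure=1)
    print(round(trust, 2))  # 0.8: the apology arrived within the repair window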


 
