
Communications of the ACM

ACM Opinion

The Danger of Anthropomorphic Language in Robotic AI Systems



The average AI or robotic system is still far less complex than the average bacterium, so why does the average person have difficulty reasoning about what these systems can (and cannot) do? This difficulty arises primarily because of language, specifically the words we use to describe what these systems do.

When describing the behavior of robotic systems, we tend to rely on anthropomorphisms: cameras "see," decision algorithms "think," and classification systems "recognize." The use of such terms creates expectations and assumptions that often do not hold, especially in the minds of people who have no training in the underlying technologies.

Designing, procuring, and evaluating artificial intelligence (AI) and robotic systems that are safe, effective, and behave in predictable ways represents a central challenge in contemporary AI. Using a systematic approach in choosing the language that describes these systems is the first step toward mitigating risks associated with unexamined assumptions about AI and robotic capabilities.

From Brookings TechStream
View Full Article


