
Communications of the ACM

ACM TechNews

The Unintended Consequences of Rationality



Harvard School of Engineering and Applied Sciences professor David C. Parkes says rational models of economics may be applied to artificial intelligence.

Credit: Eliza Grinnell/Harvard Paulson School

In an interview, Harvard School of Engineering and Applied Sciences professor David C. Parkes contends rational models of economics are applicable to artificial intelligence (AI).

He notes, for example, that the revelation principle--which holds that, without loss of generality, economic institutions can be designed so that it is in each participant's best interest to truthfully reveal his or her utility function--may apply even more directly to AI systems.
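The article does not give an example, but the revelation principle is classically illustrated by the second-price (Vickrey) auction, in which bidding one's true value is a dominant strategy. A minimal Python sketch (the function name and numbers here are illustrative, not from the article) checks that no deviation from truthful bidding improves a bidder's utility:

```python
def vickrey_utility(my_bid, my_value, other_bids):
    """Utility of one bidder in a sealed-bid second-price auction:
    the highest bidder wins and pays the second-highest bid."""
    highest_other = max(other_bids)
    if my_bid > highest_other:          # win: pay second-highest bid
        return my_value - highest_other
    return 0.0                          # lose: pay nothing

# Truthful bidding (bid = value) is never worse than any deviation,
# so a mechanism can safely ask agents to report values honestly.
my_value = 10.0
others = [4.0, 7.5]
truthful = vickrey_utility(my_value, my_value, others)
assert all(vickrey_utility(b, my_value, others) <= truthful
           for b in [0.0, 5.0, 8.0, 12.0, 20.0])
```

Because payment depends only on the other bids, misreporting can change whether a bidder wins but never the price conditional on winning, which is the intuition behind truthful mechanism design.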

Still, Parkes acknowledges, "we don't believe that the AI will be fully rational or have unbounded abilities to solve problems. At some point you hit the intractability limit--things we know cannot be solved optimally--and at that point, there will be questions about the right way to model deviations from truly rational behavior."

Parkes notes rational AI systems could eventually make better decisions than people about buying and selling property, drawing on research into AIs that build models of people's preferences through elicitation. He also notes that an AI observing someone's behavior can begin building a preference model through inverse reinforcement learning.
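The article does not describe how such a preference model is built. In the spirit of inverse reinforcement learning, one toy approach infers a utility function that ranks an agent's observed choices above the options it rejected. The sketch below is hypothetical (linear utility over option features, perceptron-style weight updates), not Parkes's method:

```python
def dot(a, b):
    """Inner product of two equal-length feature vectors."""
    return sum(x * y for x, y in zip(a, b))

def learn_preferences(observations, epochs=50, lr=0.1):
    """Fit weights w of a linear utility u(x) = w . x from observed
    choices. observations: list of (chosen, alternatives) pairs, where
    each option is a feature vector."""
    n = len(observations[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for chosen, alternatives in observations:
            for alt in alternatives:
                # Nudge w so the chosen option scores above each
                # rejected alternative (perceptron-style update).
                if dot(w, chosen) <= dot(w, alt):
                    for i in range(n):
                        w[i] += lr * (chosen[i] - alt[i])
    return w

# Toy observed choices over two-feature options.
obs = [((3.0, -2.0), [(2.0, -1.0)]),
       ((2.5, -1.5), [(1.0, -0.5)])]
w = learn_preferences(obs)
# The learned utility ranks each observed choice above its reject.
assert dot(w, (3.0, -2.0)) > dot(w, (2.0, -1.0))
```

Real inverse reinforcement learning works over sequential behavior rather than one-shot choices, but the core idea is the same: recover a utility function under which the observed behavior looks (approximately) rational.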

Parkes says economic AIs must solve problems whose complexity stems from the presence of other participants in the system, and he warns that rationality can lead to unintended results.

From Harvard University

 

Abstracts Copyright © 2015 Information Inc., Bethesda, Maryland, USA


 

