
Communications of the ACM

ACM News

Warning! AI Is Heading for a Cliff


Russell says we're hurtling toward disaster.


Credit: Stephen Mccowage/flickr

Asked if the race to achieve superhuman artificial intelligence (AI) was inevitable, Stuart Russell, UC Berkeley professor of computer science and a leading expert on AI, says yes.

"The idea of intelligent machines is kind of irresistible," he says, and the desire to make intelligent machines dates back thousands of years. Aristotle himself imagined a future in which "the plectrum could pluck itself" and "the loom could weave the cloth." But the stakes of this future are incredibly high. As Russell told his audience during a talk he gave in London in 2013, "Success would be the biggest event in human history … and perhaps the last event in human history."

For better or worse, we're drawing ever closer to that vision. Services like Google Maps and the recommendation engines that drive online shopping sites like Amazon may seem innocuous, but advanced versions of those same algorithms are enabling AI that is more nefarious. (Think doctored news videos and targeted political propaganda.)

AI devotees assure us that we will never be able to create machines with superhuman intelligence. But Russell, who runs Berkeley's Center for Human-Compatible Artificial Intelligence and co-wrote Artificial Intelligence: A Modern Approach, the standard text on the subject, says we're hurtling toward disaster. In his forthcoming book, Human Compatible: Artificial Intelligence and the Problem of Control, he compares AI optimists to the bus driver who, as he accelerates toward a cliff, assures the passengers they needn't worry—he'll run out of gas before they reach the precipice.

"I think this is just dishonest," Russell says. "I don't even believe that they believe it. It's just a defensive maneuver to avoid having to think about the direction that they're heading."

The problem isn't AI itself, but the way it's designed. Algorithms are inherently Machiavellian; they will use any means to achieve their objective. With the wrong objective, Russell says, the consequences can be disastrous. "It's bad engineering."

Proposing a solution to AI's fundamental "design error" is the goal of Professor Russell's new book, which comes out in October. In advance of publication, we sat down to discuss the state of AI and how we can avoid plunging off the edge.

This conversation has been edited for length and clarity.

You're hardly alone in sounding the alarm about artificial intelligence—I'm thinking of people like Elon Musk and Stephen Hawking. What's fueling these fears?

The main issue is: What happens when machines become sufficiently intelligent that they're difficult to control?

Anyone who's ever tried to keep an octopus will tell you that they're sufficiently smart that they're really hard to keep in one place. They find ways of escaping, they can open doors, they can squeeze under doors, they can find their way around—because they're smart. So if you make machines that are potentially more intelligent than us, then, a priori, it's far from obvious how to control those machines and how to avoid consequences that are negative for human beings. That's the nature of the problem.

You can draw an analogy to what would happen if a superior alien species landed on Earth. How would we control them? And the answer is: You wouldn't. We'd be toast. In order to not be toast, we have to take advantage of the fact that this is not an alien species, but this is something that we design. So how do we design machines that are going to be more intelligent and more powerful than us in such a way that they never have any power over us?

Elon Musk uses very colorful language. It's also true that Elon Musk and Stephen Hawking are not AI researchers. But I think to some extent that gives them a more objective view of this. They're not defensive about AI, because that's not their career. I think a lot of AI researchers are defensive about it, and that causes them to try to come up with reasons not to pay attention to the risk.


From California Magazine