
Communications of the ACM

ACM Opinion

We Shouldn't be Scared by 'Superintelligent A.I.'


Computer scientist Stuart Russell believes that if we're not careful in how we design artificial intelligence, we risk creating superintelligent machines whose objectives are not aligned with our own.

Current discussions of superhuman artificial intelligence are plagued by flawed intuitions about the nature of intelligence.

Credit: Sophia Foster-Dimino

Intelligent machines catastrophically misinterpreting human desires is a frequent trope in science fiction, perhaps used most memorably in Isaac Asimov's stories of robots that misconstrue the famous "three laws of robotics." The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence.

We don't need to go back all the way to Isaac Asimov — there are plenty of recent examples of this kind of fear. Take a recent Op-Ed essay in The New York Times and a new book, "Human Compatible," by the computer scientist Stuart Russell. Russell believes that if we're not careful in how we design artificial intelligence, we risk creating "superintelligent" machines whose objectives are not adequately aligned with our own.

As one example of a misaligned objective, Russell asks, "What if a superintelligent climate control system, given the job of restoring carbon dioxide concentrations to preindustrial levels, believes the solution is to reduce the human population to zero?" He claims that "if we insert the wrong objective into the machine and it is more intelligent than us, we lose."

Russell's view expands on arguments of the philosopher Nick Bostrom, who defined A.I. superintelligence as "an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." Bostrom and Russell envision a superintelligence with vast general abilities unlike today's best machines, which remain far below the level of humans in all but relatively narrow domains (such as playing chess or Go).

Bostrom, Russell, and other writers argue that even if there is just a small probability that such superintelligent machines will emerge in the foreseeable future, it would be an event of such magnitude and potential danger that we should start preparing for it now. In Bostrom's view, "a plausible default outcome of the creation of machine superintelligence is existential catastrophe." That is, humans would be toast.

These thinkers — let's call them the "superintelligentsia" — speculate that if machines were to attain general human intelligence, the machines would quickly become superintelligent. They speculate that a computer with general intelligence would be able to speedily read all existing books and documents, absorbing the totality of human knowledge. Likewise, the machine would be able to use its logical abilities to make discoveries that increase its cognitive power.

Such a machine, the speculation goes, would not be bounded by bothersome human limitations, such as slowness of thought, emotions, irrational biases and need for sleep. Instead, the machine would possess something like a "pure" intelligence without any of the cognitive shortcomings that limit humans.

The assumption seems to be that this A.I. could surpass the generality and flexibility of human intelligence while seamlessly retaining the speed, precision and programmability of a computer. This imagined machine would be far smarter than any human, far better at "general wisdom and social skills," but at the same time it would preserve unfettered access to all of its mechanical capabilities. And as Russell's example shows, it would lack humanlike common sense.

 

From The New York Times

 


 

