
Communications of the ACM

Viewpoint

Smart Machines Are Not a Threat to Humanity



Credit: Andrij Borys Associates / Shutterstock

Concerns have recently been widely expressed that artificial intelligence presents a threat to humanity. For instance, Stephen Hawking is quoted in Cellan-Jones1 as saying: "The development of full artificial intelligence could spell the end of the human race." Similar concerns have also been expressed by Elon Musk, Steve Wozniak, and others.

Such concerns have a long history. John von Neumann is quoted by Stanislaw Ulam8 as the first to use the term the singularity—the point at which artificial intelligence exceeds human intelligence. Ray Kurzweil5 has predicted that the singularity will occur around 2045—a prediction, based on Moore's Law, of when machine speed and memory capacity will rival human capacity. I.J. Good predicted that such super-intelligent machines will then build even more intelligent machines in an accelerating 'intelligence explosion.'4 The fear is that these super-intelligent machines will pose an existential threat to humanity—for example, keeping humans as pets or killing us all10—or perhaps humanity will simply be a victim of evolution. (For additional information, see Dubhashi and Lappin's argument on page 39.)


Comments


CACM Administrator

The following letter was published in the Letters to the Editor in the March 2017 CACM (http://cacm.acm.org/magazines/2017/3/213824).
--CACM Administrator

The Viewpoints by Alan Bundy, "Smart Machines Are Not a Threat to Humanity," and by Devdatt Dubhashi and Shalom Lappin, "AI Dangers: Imagined and Real" (both Feb. 2017), argued against the possibility of a near-term singularity wherein super-intelligent AIs exceed human capabilities and control. Both relied heavily on the limited relevance of Moore's Law, noting that raw computing power does not by itself lead to human-like intelligence. Bundy also emphasized the difference between a computer's efficiency in working an algorithm to solve a narrow, well-defined problem and human-like generalized problem-solving ability. Dubhashi and Lappin noted that neither incremental progress in machine learning nor better knowledge of a biological brain's wiring automatically leads to the "unanticipated spurts" of progress that characterize scientific breakthroughs.

These points are valid, but a more accurate characterization of the situation is that computer science may well be just one conceptual breakthrough away from being able to build an artificial general intelligence. The considerable progress already made in computing power, sensors, robotics, algorithms, and knowledge about biological systems will be brought to bear quickly once the architecture of "human-like" general intelligence is articulated. Will that be tomorrow or in 10 years? No one knows. But unless there is something about the architecture of human intelligence that is ultimately inaccessible to science, that architecture will be discovered. Study of the consequences is not premature.

Martin Smith
McLean, VA

