
Communications of the ACM

Viewpoint

AI Dangers: Imagined and Real


Credit: Mopic

In January 2015, a group of prominent figures in science and the technology industry, together with experts in artificial intelligence (AI), published "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter," calling for research on the societal impacts of AI. Unfortunately, the media grossly distorted and hyped the original formulation into doomsday scenarios. Nonetheless, some thinkers do warn of serious dangers posed by AI, tacitly invoking the notion of a Technological Singularity (first suggested by Good [8]) to ground their fears. According to this idea, computational machines will improve in competence at an exponential rate. They will reach the point where they correct their own defects and program themselves to produce artificial superintelligent agents that far surpass human capabilities in virtually every cognitive domain. Such superintelligent machines could pose existential threats to humanity.

Recent techno-futurologists, such as Ray Kurzweil, posit the inevitability of superintelligent agents as the necessary result of the inexorable rate of progress in computational technology. They cite Moore's Law for the exponential growth in the power of computer chips as the analogical basis for this claim. As the rise in the processing and storage capacity of hardware and other technologies continues, so, they maintain, will the power of AI expand, soon reaching the singularity.
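The extrapolation the futurologists rely on can be made concrete with a small sketch. This is illustrative only: the function and the starting figures below are our own, not the authors'; it simply computes the doubling curve that Moore's Law is popularly taken to describe.

```python
def moores_law_projection(base_count: float, years: float,
                          doubling_period: float = 2.0) -> float:
    """Project a quantity forward assuming it doubles every
    `doubling_period` years (the popular reading of Moore's Law)."""
    return base_count * 2 ** (years / doubling_period)

# For example, 20 years of doubling every 2 years is a 2**10 = 1024-fold
# increase; starting from ~2,300 transistors (Intel 4004, 1971):
print(moores_law_projection(2300, 20))  # 2300 * 1024 = 2,355,200
```

The singularity argument assumes this curve transfers from transistor counts to machine intelligence itself, which is precisely the analogical leap the viewpoint questions.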


Comments


CACM Administrator

The following letter was published in the Letters to the Editor in the March 2017 CACM (http://cacm.acm.org/magazines/2017/3/213824).
--CACM Administrator

The viewpoints by Alan Bundy "Smart Machines Are Not a Threat to Humanity" and Devdatt Dubhashi and Shalom Lappin "AI Dangers: Imagined and Real" (both Feb. 2017) argued against the possibility of a near-term singularity wherein super-intelligent AIs exceed human capabilities and control. Both relied heavily on the lack of direct relevance of Moore's Law, noting that raw computing power does not by itself lead to human-like intelligence. Bundy also emphasized the difference between a computer's efficiency in working an algorithm to solve a narrow, well-defined problem and human-like generalized problem-solving ability. Dubhashi and Lappin noted that incremental progress in machine learning or better knowledge of a biological brain's wiring does not automatically lead to the "unanticipated spurts" of progress that characterize scientific breakthroughs.

These points are valid, but a more accurate characterization of the situation is that computer science may well be just one conceptual breakthrough away from being able to build an artificial general intelligence. The considerable progress already made in computing power, sensors, robotics, algorithms, and knowledge about biological systems will be brought to bear quickly once the architecture of "human-like" general intelligence is articulated. Will that be tomorrow or in 10 years? No one knows. But unless there is something about the architecture of human intelligence that is ultimately inaccessible to science, that architecture will be discovered. Study of the consequences is not premature.

Martin Smith
McLean, VA


