In January 2015, a group of prominent figures in high tech and science, together with experts in artificial intelligence (AI), published a piece called "Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter," calling for research on the societal impacts of AI. Unfortunately, the media grossly distorted and hyped the original formulation into doomsday scenarios. Nonetheless, some thinkers do warn of serious dangers posed by AI, tacitly invoking the notion of a Technological Singularity (first suggested by I.J. Good) to ground their fears. According to this idea, computational machines will improve in competence at an exponential rate. They will reach the point where they correct their own defects and program themselves to produce artificial superintelligent agents that far surpass human capabilities in virtually every cognitive domain. Such superintelligent machines could pose existential threats to humanity.
Recent techno-futurologists, such as Ray Kurzweil, posit the inevitability of superintelligent agents as the necessary result of the inexorable rate of progress in computational technology. They cite Moore's Law, which describes the exponential growth in the power of computer chips, as the analogical basis for this claim. As the processing and storage capacity of hardware and other technologies continues to rise, so, they maintain, will the power of AI expand, soon reaching the singularity.
The following letter was published in the Letters to the Editor in the March 2017 CACM (http://cacm.acm.org/magazines/2017/3/213824).
--CACM Administrator
The viewpoints by Alan Bundy "Smart Machines Are Not a Threat to Humanity" and Devdatt Dubhashi and Shalom Lappin "AI Dangers: Imagined and Real" (both Feb. 2017) argued against the possibility of a near-term singularity wherein super-intelligent AIs exceed human capabilities and control. Both relied heavily on the lack of direct relevance of Moore's Law, noting that raw computing power does not by itself lead to human-like intelligence. Bundy also emphasized the difference between a computer's efficiency in working an algorithm to solve a narrow, well-defined problem and human-like generalized problem-solving ability. Dubhashi and Lappin noted that neither incremental progress in machine learning nor better knowledge of a biological brain's wiring automatically leads to the "unanticipated spurts" of progress that characterize scientific breakthroughs.
These points are valid, but a more accurate characterization of the situation is that computer science may well be just one conceptual breakthrough away from being able to build an artificial general intelligence. The considerable progress already made in computing power, sensors, robotics, algorithms, and knowledge about biological systems will be brought to bear quickly once the architecture of "human-like" general intelligence is articulated. Will that be tomorrow or in 10 years? No one knows. But unless there is something about the architecture of human intelligence that is ultimately inaccessible to science, that architecture will be discovered. Study of the consequences is not premature.
Martin Smith
McLean, VA