
Communications of the ACM

Economic and business dimensions

AI Futures: Fact and Fantasy


[Illustration: circuitry above the horizon, a city skyline below. Credit: Antiv/Shutterstock]

"AlphaZero crushes chess!" scream the headlinesa as the AlphaZero algorithm developed by Google and DeepMind took just four hours of playing against itself (with no human help) to defeat the reigning World Computer Champion Stockfish by 28 wins to 0 in a 100-game match. Only four hours to recreate the chess knowledge of one and a half millennium of human creativity! This followed the announcement just weeks earlier that their program AlphaGoZero had, starting from scratch, with no human inputs at all, comprehensively beaten the previous version AlphaGo, which in turn had spectacularly beaten one of the world's top Go players, Lee Seedol, 4-1 in a match in Seoul, Korea, in March 2016.

Interest in AI, both its opportunities and its threats, has reached fever pitch in the popular imagination. The time is ripe for books on AI and what it holds for our future, such as Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, Android Dreams by Toby Walsh, and Artificial Intelligence by Melanie Mitchell.6,8,9 All three agree on the boundless possibilities of AI, but there are also stark differences.

First, their styles perhaps reflect the personalities of their authors. On one side is Max Tegmark, a professor of physics at MIT who communicates with high-flying, folksy flamboyance. His book is hardly about AI at all, save one chapter in which he quickly compresses how "matter turns intelligent." For the rest, it ranges over vast magnitudes of space and time, as befits a cosmologist: 10,000 years in one chapter and, if that is not enough, a billion years and "cosmic settlements" and "cosmic hierarchies" in the next.

On the other side are the books by Walsh and Mitchell, more staid academics with their feet firmly on the ground. Both are computer scientists who have worked in AI for a large part of their professional lives. Part 1 of Walsh's book surveys how AI developed from Alan Turing's seminal paper, through the early days of GOFAI ("Good Old-Fashioned AI"), to modern statistical machine learning and deep learning. This is a fairly accurate account of the evolution of the discipline and the different "tribes" in it, a compressed version of the account in Pedro Domingos's The Master Algorithm3 from a few years back. Part 2 and much of Mitchell's book offer a panoramic survey of the present state of the art in AI in areas such as automated reasoning, robotics, computer vision, natural language processing, and games. Both discuss the AlphaGo program, its strengths and its limitations. Walsh voices some skepticism about how general the technique is, conjecturing that it "would take a significant human effort to get AlphaGo to play even a game like chess well." The perils of forecasting: just a year later, AlphaZero used the same principles to crush chess with no human input. Subsequently, AI has beaten humans at poker and defeated a pair of professional StarCraft II players. And while Waymo has opened limited taxi services, the excitement about autonomous driving has recently cooled off somewhat and full services are still a way off.

As to the future of AI, all three agree that superintelligence is possible in principle: that machines could, in principle, become more intelligent than humans, as indeed Turing contemplated in his original 1950 paper. For Tegmark, no physical laws would be violated; for Walsh and Mitchell, no computational principles preclude it.

When is this likely to happen? Here the books could not be more different. For Walsh and Mitchell, and for the Stanford 100-year study of AI,b it is today only a very distant possibility, several decades away if not more. Tegmark, on the other hand, seems to suggest it is just around the corner, and moreover that a sizeable number of AI researchers think so too. A poll conducted by Müller and Bostrom of the Future of Humanity Institute (FHI) is often cited as proof of this, but subsequent polls targeting a more informed group of researchers, namely those who had "made significant and sustained contribution to the field over several decades," came to very different conclusions. Another, more recent poll by the FHI group, this time targeting AI experts,4 also came to somewhat more nuanced conclusions.

So, as AI advances rapidly, what are the future risks? Here again, they agree on a few things. All three are seized of the dangers of autonomous weapons and have devoted a lot of effort to lobbying AI researchers to sign a declaration against such weapons. All are cognizant of the threat of automation to jobs, though Tegmark mentions it only in passing in one section.

But for the most part, they are on totally different planes. As befits a cosmologist, Tegmark is again thinking in grand terms: Life 3.0 is life that can (more rapidly) change both its software and its hardware. This is the stuff of movies like The Matrix or 2001: A Space Odyssey, and here Tegmark displays his considerable talent for fantasizing: a whole chapter is devoted to various Matrix-like future scenarios, some featuring benign AIs, others malignant. Life 3.0 itself opens with a parable of a future in which a HAL-like computer takes over the world. Perhaps a career awaits Tegmark in the sci-fi movie industry. Tegmark also possesses great fund-raising talent: he has founded his own Future of Life Institute (FLI) devoted to these questions and secured a donation of $10 million from Elon Musk, who also likes to indulge in such speculations. An entire chapter of the book recounts the drama surrounding an event organized to announce the institute and the grant. One can see the need for hyperbole in such projects, but it borders on irresponsible to claim, as Tegmark has done, that AI is a more imminent existential threat than climate change: climate science offers precise projections of time frames, whereas claims about superintelligence timelines are purely speculative.



Noted roboticist Rodney Brooks has warned of the "seven deadly sins of AI predictions."2 In particular, he issues a warning about imagined future technology: "If it is far enough away from the technology we have and understand today, then we do not know its limitations. And if it becomes indistinguishable from magic, anything one says about it is no longer falsifiable."

Life 3.0 is guilty of several of these deadly sins.

Walsh and Mitchell have concerns about AI risks of a totally different sort. They are skeptics about superintelligence and outline their own arguments for why it may never be possible. They worry not about superintelligent machines but about super-stupid ones, with their bugs and failures, and about the faith we are reposing in them. They worry about systematic biases in AI systems and the consequences for fairness when such systems are entrusted with decision-making responsibilities. And they are concerned about the consequences of automation for jobs and the economy.

The AI community has started taking the risks of AI seriously, and there are entire tracks devoted to them at major AI conferences. These initiatives are closer to the concrete, down-to-earth approach of Walsh and Mitchell. Another recent book, Human Compatible by Stuart Russell,7 advocates a new research direction in human-machine interaction with control issues at the forefront. As a Google team wrote:1 "one need not invoke … extreme scenarios to productively discuss accidents, and in fact doing so can lead to unnecessarily speculative discussions that lack precision … We believe it is usually most productive to frame accident risk in terms of practical (though often quite general) issues with modern ML techniques."

This harks back to the wise words of François Jacob:5 "The beginning of modern science can be dated from the time when such general questions as 'How was the Universe created?' … 'What is the essence of Life?' were replaced by more modest questions like 'How does a stone fall?' 'How does water flow in a tube?' … While asking very general questions leads to very limited answers, asking limited questions turned out to provide more and more general answers."


References

1. Amodei, D. et al. Concrete problems in AI safety. arXiv, 2016.

2. Brooks, R. The seven deadly sins of AI predictions. MIT Technology Review (Oct. 6, 2017).

3. Domingos, P. The Master Algorithm. Basic Books, 2015.

4. Grace, K. et al. When will AI exceed human performance? Evidence from the experts. arXiv, 2017.

5. Jacob, F. Evolution and tinkering. Science 196, 4295 (1977), 1161–1166.

6. Mitchell, M. Artificial Intelligence: A Guide for Thinking Humans. Farrar, Straus and Giroux, 2019.

7. Russell, S. Human Compatible: Artificial Intelligence and the Problem of Control. Viking, 2019.

8. Tegmark, M. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017.

9. Walsh, T. Android Dreams: The Past, the Present and the Future of Artificial Intelligence. C. Hurst & Co Publishers Ltd, 2017.


Author

Devdatt Dubhashi ([email protected]) is a professor in the Division of Data Science and AI, Department of Computer Science and Engineering, Chalmers University of Technology, Sweden.


Footnotes

a. https://bit.ly/3yhaekI

b. https://stanford.io/2VjkIB7


Copyright held by authors.


 
