Communications of the ACM

Artificial Intelligence: Past and Future


Communications Editor-in-Chief Moshe Y. Vardi

Chess fans remember many dramatic chess matches in the 20th century. I recall being transfixed by the interminable 1972 World Chess Championship match between challenger Bobby Fischer and defending champion Boris Spassky. The most dramatic chess match of the 20th century was, in my opinion, the May 1997 rematch between the IBM supercomputer Deep Blue and world champion Garry Kasparov, which Deep Blue won 3½–2½.

I was invited by IBM to attend the rematch. I flew to New York City to watch the first game, which Kasparov won. I was swayed by Kasparov's confidence and decided to go back to Houston, missing the dramatic second game, in which Kasparov lost—both the game and his confidence.

While this victory of machine over man was considered by many a triumph for artificial intelligence (AI), John McCarthy (Sept. 4, 1927–Oct. 24, 2011), who not only was one of the founding pioneers of AI but also coined the very name of the field, was rather dismissive of this accomplishment. "The fixation of most computer chess work on success in tournament play has come at scientific cost," he argued. McCarthy was disappointed by the fact that the key to Deep Blue's success was its sheer compute power rather than a deep understanding, exhibited by expert chess players, of the game itself.
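To make concrete what "sheer compute power" meant in this context: Deep Blue's strength came largely from searching enormous game trees very quickly. Below is a minimal Python sketch of minimax search with alpha-beta pruning, the classic brute-force game-tree technique; it is a generic illustration, not Deep Blue's actual code, and the game interface it assumes (legal_moves, apply, is_terminal, evaluate) is hypothetical.

    # Minimax search with alpha-beta pruning: a generic illustration of
    # brute-force game-tree search. The `game` object is a hypothetical
    # interface exposing legal_moves, apply, is_terminal, and evaluate.
    def alphabeta(state, depth, alpha, beta, maximizing, game):
        if depth == 0 or game.is_terminal(state):
            return game.evaluate(state)  # static score of a leaf position
        if maximizing:
            value = float("-inf")
            for move in game.legal_moves(state):
                child = game.apply(state, move)
                value = max(value, alphabeta(child, depth - 1, alpha, beta, False, game))
                alpha = max(alpha, value)
                if alpha >= beta:
                    break  # prune: the minimizing opponent will avoid this line
            return value
        else:
            value = float("inf")
            for move in game.legal_moves(state):
                child = game.apply(state, move)
                value = min(value, alphabeta(child, depth - 1, alpha, beta, True, game))
                beta = min(beta, value)
                if beta <= alpha:
                    break  # prune symmetrically for the minimizing player
            return value

Notice that all chess "understanding" is confined to the static evaluation function; the rest is raw search, which is precisely the trade-off McCarthy lamented.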

AI's next major milestone occurred last February, with IBM's Watson program winning a "Jeopardy!" match against Brad Rutter, the biggest all-time money winner, and Ken Jennings, the record holder for the longest championship streak. This achievement was also dismissed by some. "Watson doesn't know it won on 'Jeopardy!'," argued the philosopher John Searle, asserting that "IBM invented an ingenious program, not a computer that can think."

In fact, AI has been controversial from its early days. Many of its early pioneers overpromised. "Machines will be capable, within 20 years, of doing any work a man can do," wrote Herbert Simon in 1965. At the same time, AI's accomplishments tended to be underappreciated. "As soon as it works, no one calls it AI anymore," complained McCarthy. Yet it is recent worries about AI that indicate, I believe, how far AI has come.

In April 2000, Bill Joy, the technologists' technologist, wrote a "heretical" article entitled "Why the Future Doesn't Need Us" for Wired magazine. "Our most powerful 21st-century technologies—robotics, genetic engineering, and nanotech—are threatening to make humans an endangered species," he wrote. Joy's article was mostly ignored, but in August 2011 Jaron Lanier, another widely respected technologist, wrote about the impact of AI on the job market. In the not-too-far future, he predicted, it would be inconceivable to put a person behind the wheel of a truck or a cab. "What do all those people do?" he asked.

Slate magazine ran a series of articles in September 2011 titled "Will Robots Steal Your Job?" According to writer Farhad Manjoo, who detailed the many jobs we can expect to see taken over by computers and robots in the coming years, "You're highly educated. You make a lot of money. You should still be afraid."

In fact, worries about the impact of technology on the job market concern not only the far future but also the near one. In a recent book, Race Against The Machine: How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy, Erik Brynjolfsson and Andrew McAfee argue that "technological progress is accelerating innovation even as it leaves many types of workers behind." Indeed, over the past 30 years, as we saw the personal computer morph into tablets, smartphones, and cloud computing, we also saw income inequality grow worldwide. While the loss of millions of jobs over the past few years has been attributed to the Great Recession, whose end is not yet in sight, it now seems that technology-driven productivity growth is at least a major factor.

The fundamental question, I believe, is whether Herbert Simon was right, even if his timing was off, when he said "Machines will be capable ... of doing any work a man can do." While AI has proven to be much more difficult than early pioneers believed, its inexorable progress over the past 50 years suggests that Simon may have been right. Bill Joy's question, therefore, deserves not to be ignored. Does the future need us?

Moshe Y. Vardi, EDITOR-IN-CHIEF




Comments


Anonymous

Human vs. computer is abstraction vs. brute force. In the case of chess, brute force won. Brute force will go on improving; how about abstraction?

Regards
|=


Anonymous

While the question of whether machines will be capable of doing any work humans can do certainly warrants some thought, the bigger question is whether they SHOULD.

Everyone lists the agricultural industry as a prime example of how technology has wiped out farmers by industrializing and scaling out food production. But more and more, people have come to realize that this has hurt more than helped. Animals are treated inhumanely, the environment is being ravaged by fossil-fuel waste, and people do not realize how many chemicals they ingest from the herbicides and pesticides that large-scale farming requires. A few people are now advocating going back to our roots: small-scale, tech-free farming for a more viable future.

Another example is Google's driverless car, which was unveiled last year. Is it going to eliminate the driver's license, and the hundreds of jobs in some states' Departments of Transportation? Maybe. But if this technology is commercialized, the number of cars on the streets will probably double or triple, considering that insurance might cost less and far more people (such as those who are disabled) would be able to get a car. Do we really need more congested roads and higher air pollution?

It will be a long time before machines take over a large fraction of jobs, but these trends indicate that eventually we will need to use AI more intelligently, and not just to feed our capitalist greed.


Anonymous

"But if this technology is commercialized, the number of cars on the streets will probably double or triple, considering that insurance might be lowered, and far more people (such as those that are disabled) would be able to get a car. Do we really need more congested roads, and higher air pollution?"

Nonsense. Self-driving cars will enable more people to forgo personal ownership of a car. Cars will be able to come to you on demand. Opportunities to share a car will increase. And even if there are more cars on the road, which will happen over time with or without self-driving cars, traffic will flow more evenly. Even where there are now traffic lights, cars may be able to pass in both directions at once, with computer-controlled spacing.

And something like 30,000-40,000 people a year (in the U.S. alone) won't have to die in preventable accidents.


Anonymous

Cars? Why stick to 2D when there's a whole other dimension that will enable us to travel as the crow flies once driver error is removed from the equation?

Icarus


Anonymous

Maslow's hierarchy of needs: soon we will be able to do what we like rather than what we must. Who would want the job of copying figures from one ledger to another nowadays? Yet there were complaints when it disappeared. Who would want to drive a car for a living? Yuck. Bring it on, I say, and let's all move underground so we can keep the planet's surface for fun things.


Zsanett Vasahlik

"Ars longa, vita brevis" - I am looking forward to it. To err is human. Perhaps one can think of an AI model that would help people make decisions in where use of AI will lead to more good than harm for the human race and our planet. So past mistakes like the overuse of machines in agriculture will less likely to happen. Many human decisions have caused much harm to humans themselves, most often unintentionally.


Bradley Mitchell

I remember, when I was a child, reading a book, the title of which I've long since forgotten, that described the roles that robots and artificial intelligence would play in the future and how that would affect people. One of the more memorable chapters discussed how successive layers of high-tech systems could be built on top of one another, each abstracting and amplifying the functionality beneath it. It went on to say that if such a system ever failed, it would be up to people with seemingly obsolete and rare skill sets to bring things back online.

In a similar way, it may be that computer scientists and engineers will still be needed in the future to fix these highly complex AI systems when they break down (as all things ultimately do).

In that sense, it's just like our modern computers. We have all these fancy web applications and graphical user interfaces, but when everything breaks down (as it always does, even with Macs), I still need to know how to use tools like vi or pico to get up and running again.

So I think there is good reason to believe that computer science and engineering jobs will still be around decades from now. How many of them will be around is another question, though. :)


Anonymous

Can artificial intelligence arrive at the right decision by observing someone doing the wrong thing?


CACM Administrator

The following letter was published in the Letters to the Editor of the March 2012 CACM (http://cacm.acm.org/magazines/2012/3/146236).
--CACM Administrator

Concerning Moshe Y. Vardi's Editor's Letter "Artificial Intelligence: Past and Future" (Jan. 2012), I'd like to add that AI won't replace human reasoning in the near future for reasons apparent from examining the context, or "meta," of AI. Computer programs (and hardware logic accelerators) do no more than follow rules and are nothing more than a sequence of rules. AI is nothing but logic, which is why John McCarthy said, "As soon as it works, no one calls it AI." Once it works, it stops being AI and becomes an algorithm.

One must focus on the context of AI to begin to address deeper questions: Who defines the problems to be solved? Who defines the rules by which problems are to be solved? Who defines the tests that prove or disprove the validity of the solution? Answer them, and you might begin to address whether the future still needs computer scientists. Such questions suggest that the difference between rules and intelligence is the difference between syntax and semantics.

Logic is syntax. The semantics are the "why," or the making of rules to solve why and the making of rules (as tests) that prove (or disprove) the solution. Semantics are conditional, and once why is transformed into "what," something important, as yet undefined, might disappear.

Perhaps the relationship between intelligence and AI is better understood through an analogy: Intelligence is sand for casting an object, and AI is what remains after the sand is removed. AI is evidence that intelligence was once present. Intelligence is the crucial but ephemeral scaffolding.

Some might prefer to be pessimistic about the future, as we are unable to, for example, eliminate all software bugs or provide total software security. We know the reasons, but like the difference between AI and intelligence, we still have difficulty explaining exactly what they are.

Robert Schaefer
Westford, MA


