
Communications of the ACM

Kode Vicious

The Chess Player Who Couldn't Pass the Salt


The Chess Player Who Couldn't Pass the Salt, illustration

Credit: Anton Khrupin


Our company is looking at handing over much of our analytics to a company that claims to use "Soft AI" to get answers to questions about the data we have collected via our online sales system. I have been asked by management to evaluate this solution, and throughout the evaluation all I have seen is that this company has put a slick interface on top of a pretty standard set of analytical models. I think what they really mean to say is "Weak AI," and that they're using the term "Soft" so they can trademark it. What is the real difference between soft (or weak) AI and AI in general?

Feeling Artificially Dumb

Dear AD,

The topic of AI hits the news about every 10 to 20 years, whenever a new level of computing performance becomes so broadly deployed as to enable some new type of application. In the 1980s it was all about expert systems. Now we see advances in remote control (such as military drones) and statistical number crunching (search engines, voice menus, and the like).

The idea of artificial intelligence is no longer new, and, in fact, the thought that we would like to meet and interact with non-humans has existed in fiction for hundreds of years. Ideas about AI that have come out of the 20th century have some well-known sources—including the writings of Alan Turing and Isaac Asimov. Turing's scientific work generated the now-famous Turing test, by which a machine intelligence would be judged against a human one; and Asimov's fiction gave us the Three Laws of Robotics, ethical rules that were to be coded into the lowest-level software of robotic brains. The effects of the latter on modern culture, both technological and popular, are easy to gauge, since newspapers still discuss advances in computing with respect to the three laws. The Turing test is, of course, known to anyone involved in computing, perhaps better known than the halting problem (https://en.wikipedia.org/wiki/Halting_problem), much to the chagrin of those of us who deal with people wanting to write "compiler-checking compilers."

The problem inherent in almost all nonspecialist work in AI is that humans actually do not understand intelligence very well in the first place. Now, computer scientists often think they understand intelligence because they have so often been the "smart" kid, but that's got very little to do with understanding what intelligence actually is. In the absence of a clear understanding of how the human brain generates and evaluates ideas, which may or may not be a good basis for the concept of intelligence, we have introduced numerous proxies for intelligence, the first of which is game-playing behavior.

One of the early challenges in AI—and for the moment I am talking about AI in the large, not soft or weak or any other marketing buzzword—was to get a computer to play chess. Now, why would a bunch of computer scientists want to get a computer to play chess? Chess, like any other game, has a set of rules, and rules can be written in code. Chess is more complicated than many games, such as tic-tac-toe (the game used in the 1983 film WarGames to demonstrate to a fictional computer that nuclear war is unwinnable), and has a large enough set of potential moves that it is interesting from the standpoint of programming a winning set of moves or a strategy. When computer programs were first matched against human players in the late 1960s, the machines used were, by any modern standard, primitive and incapable of storing a large number of moves or strategies. It was not until 1996 that a computer, the specially built Deep Blue, won a game against a reigning world champion, Garry Kasparov, under standard tournament conditions; the full match win came a year later.
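KV doesn't show any code, but the "rules can be written in code" point is easy to make concrete. The classic technique behind early chess programs, minimax game-tree search, is not named in the column, so take the following as an illustrative sketch rather than a description of any particular engine. Applied to tic-tac-toe, the simpler game mentioned above, exhaustive search is actually feasible, and perfect play always ends in a draw: the WarGames lesson.

    # A toy minimax search over tic-tac-toe. Chess programs of the era
    # used the same idea (enumerate the legal moves, recurse on each
    # resulting position, pick the best), just with far deeper trees
    # and aggressive pruning.

    WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
                 (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
                 (0, 4, 8), (2, 4, 6)]               # diagonals

    def winner(board):
        for a, b, c in WIN_LINES:
            if board[a] != ' ' and board[a] == board[b] == board[c]:
                return board[a]
        return None

    def minimax(board, player):
        """Return (score, move): 'X' maximizes the score, 'O' minimizes it."""
        w = winner(board)
        if w is not None:
            return (1 if w == 'X' else -1), None
        moves = [i for i, sq in enumerate(board) if sq == ' ']
        if not moves:
            return 0, None                       # a draw
        best_score, best_move = None, None
        for m in moves:
            board[m] = player                    # try the move...
            score, _ = minimax(board, 'O' if player == 'X' else 'X')
            board[m] = ' '                       # ...and undo it
            if best_score is None or \
               (score > best_score if player == 'X' else score < best_score):
                best_score, best_move = score, m
        return best_score, best_move

    # Perfect play from the empty board is always a draw (score 0).
    print(minimax([' '] * 9, 'X'))               # -> (0, 0)

Nothing in this little searcher knows anything about chess or tic-tac-toe beyond the move generator and the scoring rule, which is exactly the point: it encodes a skill, not intelligence.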

Since that time, hardware has continued its inexorable march toward larger memories, higher clock speeds, and now, more cores. It is now possible for a handheld computer, such as a cellphone, to beat a chess Grandmaster. We have had nearly 50 years of human/computer competition in the game of chess, but does this mean that any of those computers are intelligent? No, it does not—for two reasons. The first is that chess is not a test of intelligence; it is a test of a particular skill—the skill of playing chess. If I could beat a Grandmaster at chess and yet not be able to hand you the salt at the table when asked, would I be intelligent? The second reason is that treating chess as a test of intelligence was based on a false cultural premise: that brilliant chess players were brilliant minds, more gifted than those around them. Yes, many intelligent people excel at chess, but chess, or any other single skill, does not denote intelligence.

Shifting to our modern concepts of soft and hard AI—or weak and strong, or narrow and general—we are now simply reaping the benefits of 50 years of advancements in electronics, along with a small set of improvements in applying statistics to very large datasets. In fact, improvement in the tools that people think are AI is, in no small part, a result of the vast amount of data that it is now possible to store.

Papers on AI topics in the 1980s often postulated what "might be possible" once megabytes of storage were commonly available. The narrow AI systems we interact with today, such as Siri and other voice-recognition systems, are not intelligent—they cannot pass the salt—but they can pick out features in human voices and then use a search system, also based on stats run on large datasets, to somewhat simulate what happens when we ask another person a question. "Hey, what's that song that's playing?" Recognizing the words is done by running a lot of stats on acoustic models, and then running another algorithm to throw away the superfluous words ("Hey," "that," "that's") to get "What song playing?" This is not intelligence, but, as Arthur C. Clarke famously quipped, "Any sufficiently advanced technology is indistinguishable from magic."
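To make the throw-away-the-filler step concrete, here is a toy sketch of stop-word filtering applied to the column's example query. The word list and the function are illustrative assumptions on my part; real assistants use statistical language models rather than a hand-written list.

    # Toy stop-word filtering: strip filler words from a recognized
    # utterance before handing it to a search system. The stop-word
    # list below is a made-up example, not any real system's.

    STOP_WORDS = {"hey", "that", "that's", "the", "a", "is"}

    def strip_filler(utterance):
        words = utterance.lower().replace(",", " ").split()
        return " ".join(w for w in words if w not in STOP_WORDS)

    print(strip_filler("Hey, what's that song that's playing?"))
    # -> "what's song playing?" (close to the column's "What song playing?")

The trick, of course, is that all the apparent understanding lives in the statistics that produced the word list and the acoustic models, not in anything resembling thought.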

All of which is to say that KV is not surprised in the least that when you peek under the hood of "Soft AI," you find a system of statistics run on large datasets. Intelligence, artificial or otherwise, remains firmly in the domain of philosophers and, perhaps, psychologists. As computer scientists, we may have pretensions about the nature of intelligence, but any astute observer can see that there is a lot more work to do before we can have a robot pass us the salt, or tell us why we might or might not want to put it on our slugs before eating them for breakfast.

KV

Related articles
on queue.acm.org

Scaling in Games and Virtual Worlds
Jim Waldo
http://queue.acm.org/detail.cfm?id=1483105

A Conversation with Arthur Whitney
http://queue.acm.org/detail.cfm?id=1531242

Information Extraction
Andrew McCallum
http://queue.acm.org/detail.cfm?id=1105679

The Network Protocol Battle
Kode Vicious
http://queue.acm.org/detail.cfm?id=2090149


Author

George V. Neville-Neil ([email protected]) is the proprietor of Neville-Neil Consulting and co-chair of the ACM Queue editorial board. He works on networking and operating systems code for fun and profit, teaches courses on various programming-related subjects, and encourages your comments, quips, and code snips pertaining to his Communications column.


Copyright held by author.

The Digital Library is published by the Association for Computing Machinery. Copyright © 2017 ACM, Inc.


 
