Be cautious about the artificial intelligence approach to computer science. It is impossible to differentiate the actual achievement of AI from the degree to which people change when confronted with what is purported to be intelligent technology. We humans are vulnerable to bending over backward, sometimes making ourselves significantly stupider, in order to make an algorithm seem smart. A great many people in the U.S., as well as elsewhere, demonstrated this danger when they interacted foolishly with deeply flawed algorithms related to the credit and mortgage industries.
There is an even greater economic danger ahead as it relates to the idea of AI. If we are gullible enough to expect emergent large-scale intelligence to arise from the vast connections of the worldwide Internet, as has been proposed with increasing frequency in Communications and elsewhere, then we risk undermining the value we place on human labor and creativity. We might thus ruin the most successful design yet invented for the purpose of generating and preserving individual human dignity and liberty: capitalism.
Those who believe in the imminent arrival of global AI (possibly emerging from the computing clouds) pretend that all the information we humans upload actually comes from some mysterious supernatural dimension. There's an economic component to the way we lie to ourselves to support this confusion. Millions of us anonymously upload our online offerings: thoughts, pictures, videos, links, votes, and more. Or, if not anonymously, we often express ourselves in such a fragmentary way, as with tweets, that there is no room left for personality. Under these circumstances we accept that we will not be paid for our acts of expression, as if we are engaged in a massive economic ritual to reify the falsehood that a global supernatural brain is speaking, instead of us.
The idea of creativity emerging autonomously from the computing clouds has the potential to ruin what might be called the endgame of basic technological development. Will technology good enough to provide comfort and security usher in a golden age for all? Or will we diverge into two species, one relatively lucky, the other relatively left out, as predicted by H.G. Wells in his 1895 novel The Time Machine?
The rarefied beneficiaries might turn out to be the owners of the computing clouds, while the rest might be inundated with advertising. The bifurcation of humanity could be sustained only so long as those on the receiving end have money to spend. But as more things become free in order to support advertising, fewer of us will be making money. The dénouement would probably be some sort of violent swing toward socialism.
This might sound like an extreme scenario, but consider how much more difficult it is for certain creative people to earn a living today than it was before the public Internet became a global social phenomenon. The most tormented examples are probably recording musicians and investigative journalists.
Alas, it is now common to hear suggestions that people in this predicament should revert to retro (inevitably more physical) strategies of sustenance, like selling branded T-shirts and other merchandise. This is a sad reversal of what had been one of the brightest aspects of technological progress. Prior to the centrality of "open culture" and the rise of online collectivization, technological progress generally supported ever more cerebral, creative, and comfortable means of making a living.
Now extrapolate: How long will it be before cheap fabricating robots are able to download T-shirt designs from the cloud and automatically manufacture customized clothing as easily as one downloads music today? And how long after that will it be before personal robots are able to build copies of the latest medical implant or other gadgets from an online design? The answers are likely to be measured in decades, not centuries. If robotics is eventually good enough to harvest the garbage dumps of the world for materials and transform them into manufactured products, then a plateau will have been reached. At that time, all consumer technology will become media technology. Even those who hoped to make a living from T-shirts will join the investigative journalist and recording musician in poverty.
How far back in history toward the Stone Age will people have to devolve in order to find a way to make a living when fabricating robots are that good? Will people be forced by the marketplace to work the fields, as academics did under various Maoist-type regimes? Not with good robots around. Surely, robots will eventually also do a better job tending the crops.
If you go back to some of the earliest thinking about how information technology might interact with the patterns of human life, you'll find examples of people who thought ahead to this potential dilemma. For instance, Ted Nelson, probably the first person to really think through how something like the Web might be built and how it would influence human society, proposed in the 1960s a design in which each file existed, from a logical point of view, in only one instance. Any user could make micropayments to gain access. The conflict between file sharing and DRM would be defused because there would be little motivation to make copies. Accessing files would be enticingly cheap, but everyone would make some incremental amount of money from sharing files with everyone else. A new social contract would emerge based on self-interest. This was not just a proposal to extend capitalism, but to broaden its benefits to a greater variety of people, since all would be able to upload interesting bits as needed.
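To make the economics of that design concrete, here is a minimal toy sketch in Python of the core mechanism: one canonical instance per file, with each access debiting the reader a tiny amount and crediting the uploader. Every name in it (XanaduLedger, upload, access) is invented for illustration; this is an assumption-laden paraphrase of the idea, not anything drawn from Nelson's actual Xanadu specifications.

```python
# Toy model of a Nelson-style micropayment economy. Every file exists
# exactly once in a global registry, and each read moves a small payment
# from the reader to the uploader. All names here are hypothetical.

class XanaduLedger:
    def __init__(self, access_price_cents=1):
        self.access_price = access_price_cents  # cost per read, in cents
        self.files = {}                         # file_id -> (author, content)
        self.balances = {}                      # user -> balance in cents

    def upload(self, author, file_id, content):
        # One canonical instance per file: duplicates are rejected,
        # so there is nothing to pirate and nothing for DRM to lock up.
        if file_id in self.files:
            raise ValueError(f"{file_id} already exists; link to it instead")
        self.files[file_id] = (author, content)
        self.balances.setdefault(author, 0)

    def access(self, reader, file_id):
        # Reading stays enticingly cheap, but the author is always paid.
        author, content = self.files[file_id]
        self.balances[reader] = self.balances.get(reader, 0) - self.access_price
        self.balances[author] += self.access_price
        return content

ledger = XanaduLedger()
ledger.upload("alice", "essay-001", "some interesting bits")
for _ in range(5000):
    ledger.access("bob", "essay-001")
print(ledger.balances)  # {'alice': 5000, 'bob': -5000}
```

The point of the sketch is only the incentive structure: because access is metered rather than copied, self-interest aligns with sharing, which is the new social contract described above.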
A popular objection when Nelson proposed this design was that few people had anything of interest or value to say, and that even if they did, no one else would be interested. Fortunately, the rise of social networking has proved these objections unfounded.
I directly experienced a later period, in the 1970s and 1980s, when Nelson was no longer a solitary pioneer. Much of the underlying architecture and ideology that guides the public Internet today appeared in rough cut during those years. The ideas had shifted. Nelson was attacked by the campus left of the time over his willingness to imagine a future in which money continued to be important. Meanwhile, the culture of AI fascinated engineers, drawing their attention away from the problem of how to reward human creativity that had so fascinated Nelson.
We ended up with an Internet and Web that is, for the moment, a sort of cross between a mass collective implementation of a Turing Test, through designs like Twitter, and the clumsy fantasy of armchair pseudo-Maoists. I realize these words could strike many as alarmist. If this is the case for you, please look into the history of collectivist design in human affairs. Such designs often appear enlightened at first, with a special way of enchanting idealistic young people. But they have also engendered the worst social disasters of the past century.
That's why I reject the idea that a collective or emergent intelligence is appearing through the computing clouds. We'll never know if it's really there, or if we have collectively become idiots.