I am a big science fiction fan, and robots have played a major role in some of my favorite speculative universes. The prototypical robot story came in the form of a play by Karel Čapek called "R.U.R.," which stood for "Rossum's Universal Robots." Written in the 1920s, it envisaged android-like robots that were sentient and created to serve humans. "Robot" came from the Russian word "работать" ("rabotat," which means "to work"). Needless to say, the story does not come out well for the humans. In a more benign and very complex scenario, Isaac Asimov created a universe in which robots with "positronic" brains serve humans and are barred by the Three Laws of Robotics from harming them:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

A "zeroth" law emerges later: a robot may not harm humanity or, by inaction, allow humanity to come to harm.
In most formulations, robots have the ability to manipulate and affect the real world. Examples include robots that assemble cars (or at least parts of them). Less sophisticated robots might be devices that fill cans with food or bottles with liquid and then seal them. The most primitive robots might not even be considered robots in ordinary parlance. One example is the temperature control for a home heating system that relies on a bimetallic strip that expands differentially, closing or opening a circuit depending on the ambient temperature.
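The sense-and-actuate loop inside that most primitive robot is small enough to spell out. Here is a minimal sketch in Python of a bang-bang thermostat with a little hysteresis; the setpoint, dead band, and temperature readings are invented for illustration (a real thermostat gets its hysteresis for free from the mechanics of the strip):

```python
# A bang-bang thermostat with hysteresis: sense the temperature,
# then close (heat on) or open (heat off) the heater circuit.
def thermostat_step(ambient_c, heater_on, setpoint_c=20.0, band_c=1.0):
    """Return the new heater state for the current ambient temperature."""
    if ambient_c < setpoint_c - band_c:
        return True            # too cold: circuit closes, heat on
    if ambient_c > setpoint_c + band_c:
        return False           # warm enough: circuit opens, heat off
    return heater_on           # inside the dead band: hold current state

# Simulate a cold room warming past the setpoint, then cooling back.
heater = False
for temp in [17.0, 18.5, 19.5, 20.5, 21.5, 20.8]:
    heater = thermostat_step(temp, heater)
    print(f"{temp:4.1f} C -> heater {'ON' if heater else 'OFF'}")
```

The dead band is the whole trick: without it, the heater would chatter on and off every time the temperature grazed the setpoint.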
I would like to posit, however, that the notion of a robot could usefully be expanded to include programs that perform functions, ingest input, and produce output with perceptible effects. A weak version of this notion might be simulations, in which the real world remains unaffected. A more compelling example is high-frequency stock trading systems, whose actions have very real consequences in the financial sector. While nothing physical happens, real-world accounts are affected and, in some cases, serious consequences follow if the programs go out of control, leading to rapid market excursions. Some market meltdowns have been attributed to large numbers of high-frequency trading programs all reacting in similar ways to the same inputs, driving the stock market rapidly upward or downward.
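That herding dynamic is easy to caricature. The toy loop below is a deliberately crude sketch, not a model of any real trading system, and every parameter in it is invented; it shows how a thousand identical momentum-chasing programs can turn one small uptick into a runaway excursion:

```python
# Toy illustration of correlated algorithmic herding; all numbers invented.
N_BOTS = 1000                  # identical momentum-following programs
IMPACT = 0.00005               # assumed price impact per net order

prev, price = 100.00, 100.01   # a single small initial uptick
for tick in range(15):
    momentum = price - prev
    # Every bot runs the same rule: buy on an uptick, sell on a downtick.
    net_orders = N_BOTS if momentum > 0 else -N_BOTS
    prev, price = price, price * (1 + net_orders * IMPACT)
    print(f"tick {tick:2d}: price {price:8.2f}")
```

Flip the sign of the initial tick and the same loop produces a crash instead; the point is not realism but that many copies of one rule behave like a single very large, very fast actor.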
Following this line of reasoning, one might conclude that we should treat as robots any programs that can have real-world, if not physical, effects. I am not quite sure where I am heading with this except to suggest that those of us who live in and participate in creating software-based "universes" might wisely give thought to the potential impact our software can have on the real world. Establishing a sense of professional responsibility in the computing community might lead to increased safety and reliability of software products and services. This is not to suggest that today's programmers are somehow irresponsible, but I suspect we are not uniformly cognizant of the side effects of a dependence on software products and services that seems to deepen daily.
A common theme I hear in many conversations is concern for the fragility or brittleness of our networked, software-driven world. We rely deeply on software-based infrastructure, and when it fails to function there can be serious side effects. Like most infrastructure, we tend not to think about it at all until it does not work or is not available. Most of us do not lie awake worrying that the power will go out (though we do rely on some people who worry about exactly that). When the power does go out, we suddenly become aware of the finiteness of battery power and the huge role electricity plays in our daily lives. Mobile phone service failed during Hurricane Sandy because cell towers and base stations ran out of power: batteries failed, and back-up generators either could not be resupplied with fuel or were underwater and could not run.
I believe it would be a contribution to our society to encourage deeper thinking about what we in the computing world produce, the tools we use to produce it, the resilience and reliability our products exhibit, and the risks they may introduce. For decades now, Peter Neumann has labored in this space, documenting and researching the nature of risk and how it manifests in the software world. We would all do well to follow his lead and to consider whether the three (or four) laws of robotics might motivate our own aspirations as creators in the endless universe of software and communications.
Vinton G. Cerf, ACM President
This is a very important concern. An existing type of artificial agent that has a major impact on our world is the corporation (for-profit, not-for-profit, governments, churches, unions, etc.). When an agent has a very different form, and operates at a very different scale of time and action, it can be difficult even to recognize its existence. I discuss this in a recent paper: http://web.eecs.umich.edu/~kuipers/research/pubs/Kuipers-ci-12.html.
Benjamin Kuipers. "An existing, ecologically-successful genus of collectively intelligent artificial creatures." Collective Intelligence (CI-2012).
Abstract: People sometimes worry about the Singularity [Vinge, 1995; Kurzweil, 2005], or about the world being taken over by artificially intelligent robots. I believe the risks of these are very small. However, few people recognize that we already share our world with artificial creatures that participate as intelligent agents in our society: corporations. Our planet is inhabited by two distinct kinds of intelligent beings --- individual humans and corporate entities --- whose natures and interests are intimately linked. To co-exist well, we need to find ways to define the rights and responsibilities of both individual humans and corporate entities, and to find ways to ensure that corporate entities behave as responsible members of society.
The last decade saw a surge of papers about the use of formal methods for dependability in architecture, and I think this is where we are heading. Assigning security, reliability and dependability properties to the architectures we create (even if in a very lightweight, cost-effective form) seems like the way to go.
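As one hedged illustration of what "very lightweight" could mean in practice, the Python sketch below attaches a declared dependability property (a response-time budget plus a fail-safe fallback) to a component at its interface. The decorator name and policy here are hypothetical, not an existing library:

```python
import functools
import time

def dependable(timeout_s, fallback):
    """Hypothetical lightweight annotation: the component must answer
    within timeout_s seconds and must degrade to fallback, not crash."""
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                result = fn(*args, **kwargs)
            except Exception:
                return fallback                   # fail-safe: never propagate
            if time.monotonic() - start > timeout_s:
                return fallback                   # declared budget violated
            return result
        # The property rides along with the code, so it can be audited.
        wrapper.dependability = {"timeout_s": timeout_s, "fallback": fallback}
        return wrapper
    return decorate

@dependable(timeout_s=0.5, fallback=20.0)
def read_temperature_sensor():
    return 21.3  # stand-in for real hardware or a network call

print(read_temperature_sensor())  # 21.3, or 20.0 on failure/timeout
```

Because the declared property is attached to the component itself, a reviewer or an architecture-level checker can enumerate every component's budget and fallback in one pass.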
In fact, the word "robot" comes from the Czech word "robota" (noun), which means "work," and in particular "serf labour." In a note for the Oxford English Dictionary, Karel Čapek described the origin of the word: it was suggested by his brother Josef.
The following letter was published in the Letters to the Editor in the April 2013 CACM (http://cacm.acm.org/magazines/2013/4/162502).
--CACM Administrator
I would like to add a bit of etymological history concerning the word "robot" to Vinton G. Cerf's President's Letter "What's a Robot?" (Jan. 2013). The Czech word "robota" shares a common root with the Russian "работа" ("rabota"), as well as with the German "Arbeit," dating to the Dark Ages of idealized German-Slavic unity in the forests of Eastern Europe. The word robota means forced labor and differs from "práce," which means any kind of work, including that of a free man, as well as creative work. Práce shares a common root with the Greek "πρᾶξις" ("praxis"), connoting free human existence. The accepted wisdom as to the origin of the word robot says that when Karel Čapek, an early-20th-century Czech author (1890–1938), needed a special word for an artificial slave in his 1920 play R.U.R. (Rossum's Universal Robots), he turned to his brother Josef, who suggested the neologism "robot," deriving it from "robota." The Čapek brothers cooperated often, co-authoring several plays and books. Josef was a modern painter (a favorite of collectors today) and illustrated many of Karel's books, especially those for children. The brothers also embraced English culture and democracy. Karel died shortly after part of Czechoslovakia was annexed by Nazi Germany in 1938, and Josef was arrested by the Gestapo and died in the Bergen-Belsen concentration camp in April 1945.
Ivan Ryant
Prague, Czech Republic
The following letter was published in the Letters to the Editor in the April 2013 CACM (http://cacm.acm.org/magazines/2013/4/162502).
--CACM Administrator
Vinton G. Cerf cited a common misconception that the word "robot" is derived from the Russian word "rabota" (work). The origin of robot is actually more subtle: Unlike Russian, which has only one word for work, the Czech language (the native language of Karel Čapek, who coined the term "robot") has two; the general term is "práce"; the second, "robota" (similar to the Russian word), means "forced labor," as in the labor of a servant. Čapek chose robota since his intent was for robots to be servants to humanity.
Both the Russian word rabota and the Czech word robota derive from the same Slavic word "rab" (slave), because in earlier times work could be seen as so undignified, even shameful, that no self-respecting noble person would do it. Later, when attitudes changed and work was seen as dignified (and it was shameful to be a non-working social parasite), the original word for work in Russian lost its shameful association, while Czech added a new word to describe the dignity of work.
Vladik Kreinovich
El Paso, TX