In the eleventh century, St. Anselm of Canterbury proposed an argument for the existence of God that went roughly like this: God is, by definition, the greatest being that we can imagine; a God that doesn't exist is clearly not as great as a God that does exist; ergo, God must exist. This is known as the ontological argument, and there are enough people who find it convincing that it's still being discussed, nearly a thousand years later. Some critics of the ontological argument contend that it essentially defines a being into existence, and that that is not how definitions work.
God isn't the only being that people have tried to argue into existence. "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever," the mathematician Irving John Good wrote, in 1965:
Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
The idea of an intelligence explosion was revived in 1993, by the author and computer scientist Vernor Vinge, who called it "the singularity," and the idea has since achieved some popularity among technologists and philosophers. Books such as Nick Bostrom's "Superintelligence: Paths, Dangers, Strategies," Max Tegmark's "Life 3.0: Being Human in the Age of Artificial Intelligence," and Stuart Russell's "Human Compatible: Artificial Intelligence and the Problem of Control" all describe scenarios of "recursive self-improvement," in which an artificial-intelligence program repeatedly designs an improved version of itself.