A basic limitation of today's artificial intelligence (AI) is that it largely mirrors human thinking. Since machine learning extracts information from the existing world, exploring alternative ways to think is difficult, if not impossible. "There is only one highly intelligent organism we know—humans—and that's all we have to work with," states Sebastian Nowozin, principal researcher at Microsoft Research Cambridge, U.K.
A handful of researchers hope to change all of this. They are exploring the concept of artificial life and artificial evolution. Essentially, they're creating synthetic beings—and brains—inside computers, and allowing these systems to evolve. "Artificial life…allows us to replay the tape of life over and over. It allows us to create different settings and see what happens differently each time," says Jeff Clune, assistant professor of computer science at the University of Wyoming.
Artificial life could help scientists develop autonomous "brains" that would guide space vehicles, robots, drones, and other devices, and allow these machines to adapt and adjust to conditions without any human input. In addition, robots and other devices could gain a level of agility that mimics biological life forms. As Clune puts it, "If we can create robots as agile and intelligent as natural life, the potential applications are limitless."
The idea of creating artificial life to study the real world dates back to the late 1980s, when computer scientist Christopher Langton began pondering how humans could gain insights beyond "life-as-we-know-it" by venturing into "life-as-it-could-be." Yet, only in recent years, with the introduction of more powerful computers and neural nets, has the concept gained traction.
Clune says artificial life research centers on a fundamental question: "What ingredients are necessary to create an open-ended evolutionary process?" Answering this question could lead to far more complex machines that would mimic qualities of animals as diverse as jaguars, hawks, and human beings. So far, his research team has used machine learning to demonstrate robots that could, when damaged, adapt like animals within a minute or two and continue with their mission.
Christoph Adami, a professor of microbiology and molecular genetics at Michigan State University, is also putting a microscope to artificial life. His research has focused on understanding how evolution produces complex things from simple things. More specifically, he studies whether the dynamics of evolution can be used to create an artificial brain that not only computes, but reasons and, perhaps, feels.
Adami's research revolves around the concept of Integrated Information Theory (IIT), which attempts to gauge consciousness in physical systems. In order to "evolve" brains, Adami creates artificial worlds inside the computer, with a population of genomes that encode brains. Researchers then transplant these brains onto virtual creatures that live in these artificial worlds, ask them to perform a specific task, and then study how the creatures learn and adapt over time—including when they encounter new and unforeseen circumstances.
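The evolve-and-evaluate loop Adami describes can be sketched in miniature. The following is an illustrative toy, not his lab's actual system: the genome encoding, task, and parameters are all assumed for demonstration. Each genome is a bit string standing in for an encoded brain, and "fitness" scores how well that brain performs a hypothetical task; fitter genomes survive and reproduce with mutation.

```python
import random

# A minimal sketch of an artificial-evolution loop, with an assumed
# bit-string genome and a hypothetical target behavior as the "task."
TARGET = [1, 0, 1, 1, 0, 1, 0, 0]

def fitness(genome):
    """Score a genome by how well its encoded behavior matches the task."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def evolve(pop_size=50, generations=100):
    # Start from a random population of genomes.
    population = [[random.randint(0, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        survivors = population[:pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Even this toy shows the key property the researchers exploit: the experimenter controls every "ingredient" (mutation rate, selection pressure, task) and can replay the run many times to study how adaptation unfolds.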
These "evolved" artificial brains ultimately emerge as complex control circuits that become extremely efficient at performing certain tasks. Remarkably, they do not follow the design rules human engineers use. "We don't understand the brains that we create," Adami admits. "However, we can study them in unprecedented detail in an attempt to discover some of the principles that make them work as well as they do."
Ultimately, Adami hopes to produce the next generation of AI. The lab is now working with the U.S. National Aeronautics and Space Administration (NASA) to develop a brain for a proposed spacecraft that would mine the asteroid belt. Because ultra-small vehicles of this sort would lack an antenna large enough to communicate with Earth regularly, humans will not be able to control them. "They require brains that can make decisions autonomously. Not only should they be able to recognize their environment, namely asteroid shapes, they need to be able to learn from experience, and engage with asteroid shapes they have never seen before," he says.
Artificial life research revolves around a core concept: "If you want to test hypotheses about evolutionary processes, you have to be able to manipulate many different aspects. You have to understand and test the influence and history of adaptation for many replicates," Adami explains. As a result, he is now pushing the boundaries of artificial life further through an emerging machine learning model known as Markov Network Brains (MNB).
The conceptual computing framework would bypass graphics-processing units (GPUs), which apply the same computation across many data elements in parallel, and instead use field-programmable gate arrays (FPGAs) that interact with one another while receiving input from sensors and other sources. "Using FPGA technology, we will be able to go from hundreds of neurons to millions of neurons without slowing the process significantly," he says. A Markov brain would operate in a true digital state; as a result, "We should be able to achieve much more realistic brain-like behavior."
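In the standard published formulation, a Markov Network Brain is a set of binary nodes updated by probabilistic logic gates, each gate reading a few nodes and writing others according to a probability table. The sketch below is an illustrative software version of that idea; the wiring, node count, and table values are assumptions for demonstration, not Adami's actual model or an FPGA implementation.

```python
import random

class ProbabilisticGate:
    """One gate of a toy Markov brain: reads some binary nodes,
    writes others according to a probability table."""
    def __init__(self, inputs, outputs, table):
        self.inputs = inputs    # indices of nodes this gate reads
        self.outputs = outputs  # indices of nodes this gate writes
        self.table = table      # input pattern -> firing probabilities

    def fire(self, state, next_state):
        pattern = tuple(state[i] for i in self.inputs)
        for out, p in zip(self.outputs, self.table[pattern]):
            if random.random() < p:
                next_state[out] = 1

def step(state, gates):
    """Advance one time step: every gate reads the current state and
    writes (OR-combined) into the next state."""
    next_state = [0] * len(state)
    for gate in gates:
        gate.fire(state, next_state)
    return next_state

# Example: a single gate reading nodes 0 and 1 and writing node 2,
# configured as a noisy AND -- node 2 fires with probability 0.99
# only when both inputs are on.
gate = ProbabilisticGate(
    inputs=[0, 1], outputs=[2],
    table={(0, 0): [0.01], (0, 1): [0.01],
           (1, 0): [0.01], (1, 1): [0.99]},
)
print(step([1, 1, 0], [gate]))
```

Because each node is strictly binary and each gate is a small lookup table, a network like this maps naturally onto digital hardware such as FPGAs, which is the appeal Adami describes.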
Clune says the field of artificial life is poised to take off over the next few years. "The more computing power we have, the more interesting artificial life research becomes," he explains.
Nowozin points out that in addition to creating smarter autonomous machines, the field could help optimize business processes, produce smarter chatbots, and yield benefits in fields as diverse as medicine and gaming.
In the future, artificial intelligence will likely come from many sources, Nowozin says. "The ultimate question is whether there is a single type of intelligence that emerges under any substrate, or whether different substrates produce different outcomes. This research produces insights into the fundamental questions of life, and yet it has very practical applications for robotics and other systems."
Samuel Greengard is an author and journalist based in West Linn, OR.