Artificial intelligence (AI) technology has evolved into systems that use machine learning, huge data sets, complex sensors, and new algorithms to complete specific tasks. These systems are designed to master their tasks in ways that humans never could.
In the 1980s, researchers began to concentrate on the kinds of skills computers were good at, building groups of systems that operated according to their own brand of reasoning. The researchers used probability-based algorithms to teach computers how humans completed a certain task, and then let the systems determine how best to emulate those behaviors. The researchers also used genetic algorithms, which analyze randomly generated chunks of code, pick out the highest-performing ones, and splice them together to create new code. As the process is repeated, a highly efficient program evolves, and these developments have led to a variety of AI systems. For example, the Massachusetts Institute of Technology's Rodney Brooks designed a six-legged, insect-inspired robot that can navigate complicated terrain autonomously, while Google is working on a driverless car outfitted with AI systems.
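To make the genetic-algorithm loop described above concrete, here is a minimal sketch in Python. The article does not specify any implementation details, so everything here is an illustrative assumption: candidate "chunks of code" are represented as fixed-length bit strings, and the scoring function simply counts 1-bits, standing in for whatever performance measure a real system would apply.

```python
import random

# Illustrative assumptions (not from the article): genomes are bit strings,
# and fitness is the count of 1-bits. A real system would evaluate actual code.
GENOME_LEN = 32
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def random_genome():
    # A randomly generated "chunk of code".
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in performance score.
    return sum(genome)

def select(population):
    # Pick out a high performer via a small tournament.
    contenders = random.sample(population, 3)
    return max(contenders, key=fitness)

def crossover(a, b):
    # "Splice together" two high performers at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

def mutate(genome):
    # Occasionally flip a bit to keep exploring new variants.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Repeat the select/splice/mutate process; fitter programs accumulate.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```

Run repeatedly, the population's best score climbs toward the maximum, which is the sense in which "a highly efficient program evolves" from random starting material.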
"If you told somebody in 1978, 'You're going to have this machine, and you'll be able to type a few words and instantly get all of the world's knowledge on that topic,' they would probably consider that to be AI," says Google cofounder Larry Page. "That seems routine now, but it's a really big deal."
From Wired
Abstracts Copyright © 2011 Information Inc., Bethesda, Maryland, USA