
Communications of the ACM

ACM News

Self-Taught AI Shows Similarities to How the Brain Works



In recent work, computational models of the mammalian visual and auditory systems built with self-supervised learning have shown a closer correspondence to brain function than their supervised-learning counterparts.

Credit: Señor Salme/Quanta Magazine

For a decade now, many of the most impressive artificial intelligence systems have been taught using a huge inventory of labeled data. An image might be labeled "tabby cat" or "tiger cat," for example, to "train" an artificial neural network to correctly distinguish a tabby from a tiger. The strategy has been both spectacularly successful and woefully deficient.

Such "supervised" training requires data laboriously labeled by humans, and the neural networks often take shortcuts, learning to associate the labels with minimal and sometimes superficial information. For example, a neural network might use the presence of grass to recognize a photo of a cow, because cows are typically photographed in fields.
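The cow-and-grass shortcut can be made concrete with a toy sketch (all data and names here are hypothetical, invented for illustration): a learner that simply picks the single input feature most correlated with the training labels will happily latch onto a spurious cue like "grass present" when that cue and the true signal are perfectly confounded in the training set.

```python
# Hypothetical toy example of shortcut learning.
# Each "image" is a feature pair: (grass_present, cow_present).
# Label: 1 if the photo shows a cow. In training, cows always
# appear in grassy fields, so grass and cow are perfectly correlated.
train = [
    ((1, 1), 1),  # grass + cow      -> cow
    ((1, 1), 1),
    ((0, 0), 0),  # no grass, no cow -> not a cow
    ((0, 0), 0),
]

def best_single_feature(data):
    """Return the index of the feature that best predicts the label."""
    n_features = len(data[0][0])
    def accuracy(i):
        return sum(x[i] == y for x, y in data) / len(data)
    return max(range(n_features), key=accuracy)

# Both features predict the training labels perfectly, so the tie
# goes to feature 0: the classifier "learns" grass, not cows.
shortcut = best_single_feature(train)

# At test time, a cow photographed indoors (no grass) is misclassified.
test_image = (0, 1)           # no grass, but there is a cow
prediction = test_image[shortcut]
print(prediction)             # 0 -- "not a cow": the shortcut fails
```

The point of the sketch is that the shortcut is invisible on the training distribution; both features score perfect training accuracy, and only a distribution shift at test time exposes which one the model actually relied on.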

"We are raising a generation of algorithms that are like undergrads [who] didn't come to class the whole semester and then the night before the final, they're cramming," said Alexei Efros, a computer scientist at the University of California, Berkeley. "They don't really learn the material, but they do well on the test."

From Quanta Magazine

