
Communications of the ACM

ACM News

How Generative Models Could Go Wrong



The most immediate risk is that large language models could amplify the sort of quotidian harms that can be perpetrated on the Internet today.

Credit: George Wylesol

In 1960, Norbert Wiener published a prescient essay. In it, the father of cybernetics worried about a world in which "machines learn" and "develop unforeseen strategies at rates that baffle their programmers." Such strategies, he thought, might involve actions that those programmers did not "really desire" and were instead "merely colourful imitation[s] of it." Wiener illustrated his point with the German poet Goethe's fable "The Sorcerer's Apprentice", in which a trainee magician enchants a broom to fetch water to fill his master's bath. But the trainee is unable to stop the broom once its task is complete. Lacking the common sense to know when to stop, the broom keeps bringing water until it floods the room.

The striking progress of modern artificial-intelligence (AI) research has seen Wiener's fears resurface. In August 2022, AI Impacts, an American research group, published a survey that asked more than 700 machine-learning researchers about their predictions for both progress in AI and the risks the technology might pose. The typical respondent reckoned there was a 5% probability of advanced AI causing an "extremely bad" outcome, such as human extinction. Fei-Fei Li, an AI luminary at Stanford University, talks of a "civilisational moment" for AI. Asked by an American TV network if AI could wipe out humanity, Geoff Hinton of the University of Toronto, another AI bigwig, replied that it was "not inconceivable."

From The Economist

 


 
