
Communications of the ACM

ACM News

ChatGPT Maker OpenAI Calls for AI Regulation, Warning of 'Existential Risk'


Visitors look at a booth for an AI-equipped chatbot at a trade show for AI tech companies in Tokyo last week.

The OpenAI leaders warn in their note against pausing development, adding that "it would be unintuitively risky and difficult to stop the creation of superintelligence."

Credit: Richard A. Brooks/AFP/Getty Images

The leaders of OpenAI, creator of the viral chatbot ChatGPT, are calling for the regulation of "superintelligence" and other powerful artificial intelligence systems, suggesting that an equivalent of the world's nuclear watchdog would help reduce the "existential risk" posed by the technology.

In a statement published on the company website this week, co-founders Greg Brockman and Ilya Sutskever, as well as CEO Sam Altman, argued that an international regulator would eventually become necessary to "inspect systems, require audits, test for compliance with safety standards, (and) place restrictions on degrees of deployment and levels of security."

They made a comparison with nuclear energy as another example of a technology with the "possibility of existential risk," raising the need for an authority similar in nature to the International Atomic Energy Agency (IAEA), the world's nuclear watchdog.

Over the next decade, "it's conceivable that … AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today's largest corporations," the OpenAI team wrote. "In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there."

From The Washington Post
View Full Article
