
Communications of the ACM

ACM News

Congress Really Wants to Regulate A.I., But No One Seems to Know How



OpenAI CEO Sam Altman floated the idea of a new government agency tasked with licensing “powerful” A.I. models.

Credit: Win McNamee/Getty

In February 2019, OpenAI, a little-known artificial-intelligence company, announced that its large-language-model text generator, GPT-2, would not be released to the public "due to our concerns about malicious applications of the technology." Among the dangers, the company stated, was a potential for misleading news articles, online impersonation, and automating the production of abusive or faked social-media content and of spam and phishing content. As a consequence, OpenAI proposed that "governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems."

This week, four years after that warning, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law met to discuss "Oversight of A.I.: Rules for Artificial Intelligence." As has been the case with other tech hearings on the Hill, this one came after a new technology with the capacity to fundamentally alter our social and political lives was already in circulation. Like many Americans, the lawmakers became concerned about the pitfalls of large-language-model artificial intelligence in March, when OpenAI released GPT-4, the latest and most polished iteration of its text generator. At the same time, the company added it to a chatbot it had launched in November that used GPT to answer questions in a conversational way, with a confidence that is not always warranted, because GPT has a tendency to make things up.

From The New Yorker
View Full Article

 


 

