The probability of AI going off the rails and hurting people has increased considerably thanks to the explosion in use of generative AI technologies such as ChatGPT. That, in turn, makes it necessary to regulate certain high-risk AI use cases in the United States, the authors of a new Association for Computing Machinery (ACM) paper said last week.
Using a technology like ChatGPT to write a poem, or to produce smooth-sounding language about content you know well and can error-check yourself, is one thing, said Jeanna Matthews, one of the authors of the ACM Technology Policy Council's new paper, titled "Principles for the Development, Deployment, and Use of Generative AI Technologies."
"It's a completely different thing to expect the information you find there to be accurate in a situation where you are not capable of error-checking it for yourself," Matthews said. "Those are two very different use cases. And what we're saying is there should be limits and guidance on deployments and use. 'It's safe to use this for this purpose. It is not safe to use it for this purpose.'"
The new ACM paper lays out eight principles that it recommends developers, deployers, and users of generative AI systems follow. The first four principles (regarding transparency; auditability and contestability; limiting environmental impact; and heightened security and privacy) were borrowed from a previous ACM paper published in October 2022.
From Datanami