
Communications of the ACM

BLOG@CACM

Scientists, Governments, and Corporations Urgently Need to Work Together to Mitigate AI Risk


Gary Marcus

Originally published on The Road to AI We Can Trust.

 

Regular readers of this Substack will know that Geoff Hinton and I disagree about a lot: I love symbols; he hates them. He thinks neural networks "understand" the world; I think they do not. He probably thinks we are closer to AGI (artificial general intelligence) than I do.

But we are both really, deeply worried about AI, and seem to be converging on a common idea about what to do about it.

Most of our concerns are shared. I have been writing with urgency about the contributions of large language models to misinformation and about how bad actors might misuse AI, and in my essay AI risk ≠ AGI risk, I argued that we should worry about both near-term and long-term risks.

In endorsing the "pause letter" (despite expressing some concerns about the details), I was saying that we need to slow down and focus on the kind of research the pause letter emphasized, viz., work on making sure that AI systems are trustworthy and reliable. (This was also the major thrust of my 2019 book with Ernest Davis, which was subtitled Building AI We Can Trust; the point of the book was that current approaches were not in fact getting us to such trust.)

Hinton has heretofore been fairly quiet about AI risk, aside from a hint in a recent CBS News interview in March, in which he said rather cryptically that it was "not inconceivable" that AI could wipe out humanity. In the last few days he left Google and spoke more freely with Cade Metz, in a must-read article in The New York Times. Metz reports that Hinton expressed worries about misinformation ("His immediate concern is that the Internet will be flooded with false photos, videos and text, and the average person will 'not be able to know what is true anymore'"), about misuse of AI ("It is hard to see how you can prevent the bad actors from using it for bad things"), and about the difficulty of controlling unpredictable machines ("he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze").

I agree with every word. And I independently made each of these points a little less than two weeks ago, when I spoke at TED. (Rumor has it that my talk will be released in the next couple of weeks.)

The question is what we should do about it.

§

At TED, and in a companion op-ed that I co-wrote in The Economist, I called for the formation of an International Agency for AI:

 

An archived version of this essay can be found here

We called for

the immediate development of a global, neutral, non-profit International Agency for AI (IAAI), with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure, and peaceful AI technologies

The thing that struck me most about Hinton's interview is that he has, on his own, converged on a very similar place. Quoting Metz in the Times:

The best hope is for the world's leading scientists to collaborate on ways of controlling the technology. "I don't think they should scale this up more until they have understood whether they can control it," he said.

Let's get on it.

§

I have spent all my time since TED gathering a crew of interested collaborators, speaking to various leaders in government, business, and science, and inviting community input. Philanthropists, we need your help.

Anyone who wants to help can reach out to me here.

 

Gary Marcus (@garymarcus) is a scientist, bestselling author, and entrepreneur, deeply concerned about current AI but really hoping that we might do better. He is the co-author of Rebooting AI and host of Humans versus Machines.


 
