
Communications of the ACM

ACM News

Watch Google’s Igor Markov Explain How to Avoid the AI Apocalypse


Google software engineer Igor Markov at The AI Conference in San Francisco.

Credit: VentureBeat

An attack by artificial intelligence on humans, said Google software engineer and University of Michigan professor Igor Markov, would resemble the Black Death that swept through Europe in the 14th century, killing up to 50% of the population.

"Virus particles were very small and there were no microscopes or notion of infectious diseases, there was no explanation, so the disease spread for many years, killed a lot of people, and at the end no one understood what happened," he said. "This would be illustrative of what you might expect if a superintelligent AI would attack. You would not know precisely what's going on, there would be huge problems, and you would be almost helpless."

In a recent talk on how to keep superintelligent AI from harming humans, Markov looked to lessons from ancient history rather than devising technological solutions.

Markov joined sci-fi author David Brin and other influential names in the artificial intelligence community Friday at The AI Conference in San Francisco.

One lesson from early humans that could help in the fight against AI: make friends. Domesticate AI the same way Homo sapiens turned wolves into their protectors and friends.

From VentureBeat
View Full Article