University of Louisville researcher Roman Yampolskiy and hacktivist Federico Pistono have developed a set of worst-case scenarios for a potential malevolent artificial intelligence (AI).
Anticipating as many negative outcomes as possible will help guard against disaster, according to Yampolskiy.
The researchers approached the issue with strategies borrowed from cybersecurity, compiling a list of everything that could go wrong; such a catalog should make it easier to test any safeguards that are eventually put in place.
In one scenario, the researchers envision an AI system that unleashes a global propaganda war that sets governments and populations in opposition, feeding a "planetary chaos machine."
The work was paid for by a fund established by Elon Musk, who has described AI as humanity's "biggest existential threat."
Yampolskiy cites Microsoft's Twitter chatbot Tay as an example of how quickly AI can get out of control; soon after launch, Tay was tricked into spewing racist comments. Yampolskiy says the incident illustrates how unpredictable such systems can be.
University of Sheffield researcher Noel Sharkey agrees that a cybersecurity-inspired approach to testing is a good idea for any system, especially autonomous weapons.
From New Scientist
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA