Although the emergent field of artificial intelligence (AI) safety typically studies the unintended consequences of well-intentioned AIs, University of Louisville researchers Federico Pistono and Roman Yampolskiy aim to correct an important oversight: how deliberately malevolent AIs might be designed and the conditions under which they might come about.
One factor that could signal the development of a malevolent AI system is the absence of global oversight boards, an absence a development group could help bring about by downplaying the significance of its work and the hazards it poses. "The strategy is to disseminate conflicting information that would create doubt in the public's imagination about the dangers and opportunities of artificial general intelligence research," Pistono and Yampolskiy say.
They note another telltale sign would be closed-source code underlying the AI system, although Pistono and Yampolskiy are not convinced open-sourcing AI software is any safer, since it could give evildoers access to the software as well. Notable closed-source AI developments include Google DeepMind's Go-playing AI, about whose governance Google has provided little clarity. Open-source efforts include the OpenAI nonprofit, which aims to advance digital intelligence in ways that benefit humanity without being constrained by the need for financial profitability.
Another major shortcoming the researchers identify is that cybersecurity practice for AI lags far behind that for other software in refinement.
From Technology Review
Abstracts Copyright © 2016 Information Inc., Bethesda, Maryland, USA