Advances in machine learning driven by neural networks are stoking concerns that artificial intelligence (AI) will be used for mass manipulation and misinformation.
The advent of AI that generates deepfakes, fabricated images that look like the real thing, compounds this threat, which will escalate as systems learn from ever-larger datasets.
Also worrying are new computer chips designed to train neural networks and scale up AI; likely milestones in language processing could yield conversational bots able to masquerade as humans and trick people into revealing sensitive personal data.
OpenAI's Jack Clark envisions governments building machine learning systems to radicalize populations in other countries or to force views onto their own citizens.
In an ideal scenario, AI could be used to identify and counter such threats, but a machine learning arms race is likely to continue for the foreseeable future.
From The New York Times