Adversarial input can fool a machine-learning algorithm into misperceiving images.
Credit: Christian Szegedy et al.
Over the past five years, machine learning has blossomed from a promising but immature technology into one that can achieve close to human-level performance on a wide array of tasks. In the near future, it is likely to be incorporated into an increasing number of technologies that directly impact society, from self-driving cars to virtual assistants to facial-recognition software.
Yet machine learning also offers brand-new opportunities for hackers. Malicious inputs specially crafted by an adversary can "poison" a machine-learning algorithm during its training period, or dupe it after it has been trained. While the creators of a machine-learning algorithm usually benchmark its average performance carefully, security researchers say it is unusual for them to consider how it performs against adversarial, worst-case inputs.
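To make the idea concrete, here is a minimal sketch of one well-known way to craft such a deceptive input, the fast gradient sign method of Goodfellow and colleagues (a follow-up to the Szegedy et al. work credited above). The `model`, `image`, and `label` arguments and the `epsilon` perturbation budget are illustrative assumptions, not drawn from the article:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast gradient sign method: nudge every pixel slightly in the
    direction that most increases the model's loss, yielding an image
    that looks unchanged to a human but can flip the model's prediction."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()  # gradient of the loss with respect to the input pixels
    # Step each pixel by epsilon along the sign of that gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()  # keep pixels in the valid range
```

Because `epsilon` is tiny relative to the pixel scale, the perturbed image is visually indistinguishable from the original, yet the classifier's output can change completely.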