Research from Johns Hopkins University (JHU) suggests new ways to prevent artificial intelligence (AI) from being visually deceived.
Experiments by JHU's Zhenglong Zhou and Chaz Firestone involved showing people a broad range of adversarial images and asking them to pick, from a list of up to 48 options, the object an AI would wrongly claim to see. Across six different image types, 81% to 98% of participants anticipated the machines' (incorrect) labels at above-chance rates.
Auburn University's Anh Nguyen says this "suggests that humans are able to decipher these images in the same way as the poor victim machines do."
Nguyen believes people could help computers cope with adversarial images by training AIs to emulate human visual perception, then using that model as a defense mechanism that sifts out anything not conforming to what the human model sees.
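The defense Nguyen describes amounts to a consistency check between two classifiers: a primary model and one trained to mimic human judgments, with disagreements treated as suspect. A minimal toy sketch of that idea, using hypothetical stand-in classifiers (`classify_main`, `classify_human_model`) and invented example data rather than anything from the study:

```python
# Toy sketch of a consistency-filter defense: reject inputs on which a
# primary classifier and a human-emulating classifier disagree.
# All functions and data below are illustrative stand-ins.

def classify_main(image):
    # Stand-in for the primary AI classifier's prediction.
    return image["machine_label"]

def classify_human_model(image):
    # Stand-in for a model trained to emulate human visual perception.
    return image["human_label"]

def filter_adversarial(images):
    """Keep images where both models agree; flag the rest as suspect."""
    accepted, rejected = [], []
    for img in images:
        if classify_main(img) == classify_human_model(img):
            accepted.append(img)
        else:
            rejected.append(img)  # disagreement: possibly adversarial
    return accepted, rejected

images = [
    {"name": "dog_photo", "machine_label": "dog", "human_label": "dog"},
    {"name": "static_noise", "machine_label": "armadillo",
     "human_label": "unrecognizable"},
]
accepted, rejected = filter_adversarial(images)
print([img["name"] for img in accepted])   # the agreed-upon images
print([img["name"] for img in rejected])   # the flagged images
```

In a real system, the two stand-in functions would be learned models, and the hard part is the second one: building a classifier whose judgments actually track human perception.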
From New Scientist
Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA