Researchers at Israel's Ben-Gurion University of the Negev (BGU) and Tel Aviv University found that facial recognition (FR) systems can be thwarted by fabric face masks printed with adversarial patterns.
The researchers employed a gradient-based optimization process to generate a universal adversarial pattern that, printed on a fabric mask, causes FR models to classify every wearer as an unknown identity.
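The paper spells out the full optimization; as a rough illustration only, the PyTorch sketch below trains one universal pattern that pushes every wearer's face embedding away from their true identity. The TinyEmbedder network, the rectangular apply_mask paste, and the random tensors standing in for a face dataset are simplifications of ours, not the authors' pipeline, which attacks pretrained FR backbones and renders the pattern onto a 3D mask model.

```python
# Sketch of a gradient-based universal perturbation attack on a face-embedding
# model. TinyEmbedder is a hypothetical stand-in for a real FR backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEmbedder(nn.Module):
    """Stand-in for a pretrained FR backbone that outputs identity embeddings."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)  # unit-length embeddings

def apply_mask(faces, pattern, region):
    """Naive simplification: paste the pattern over the lower-face region."""
    y0, y1, x0, x1 = region
    masked = faces.clone()
    masked[:, :, y0:y1, x0:x1] = pattern.clamp(0, 1)
    return masked

model = TinyEmbedder().eval()
for p in model.parameters():
    p.requires_grad_(False)  # attack optimizes the pattern, not the model

region = (56, 112, 16, 96)  # lower half of a 112x112 face crop
pattern = torch.rand(3, region[1] - region[0], region[3] - region[2],
                     requires_grad=True)
opt = torch.optim.Adam([pattern], lr=0.01)

faces = torch.rand(32, 3, 112, 112)  # stand-in for a face dataset
with torch.no_grad():
    targets = model(faces)           # clean identity embeddings

for step in range(200):
    emb = model(apply_mask(faces, pattern, region))
    # Universal objective: one shared pattern pushes every wearer's embedding
    # away from their true identity, so the system sees an "unknown" person.
    loss = F.cosine_similarity(emb, targets, dim=1).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```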
BGU's Alon Zolfi said, "The perturbation depends on the FR model it was used to attack, which means different patterns will be crafted depending on the different victim models."
Zolfi suggested FR models could be hardened against such masks by training them on images containing adversarial patterns, by teaching them to base predictions only on the upper area of the face (sketched below), or by training them to reconstruct lower facial areas from upper facial areas.
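As a minimal sketch of the upper-face-only defense, continuing the setup above: crop each face image to its top rows before embedding, so the mask-covered region never reaches the model. The function name and the keep_rows split point are hypothetical choices, not from the source.

```python
import torch.nn.functional as F

def upper_face_embedding(model, faces, keep_rows=56):
    """Hypothetical defense: compute the identity embedding from the upper
    half of each 112x112 face crop, ignoring the mask-covered region."""
    upper = faces[:, :, :keep_rows, :]             # keep only the top rows
    upper = F.interpolate(upper, size=(112, 112),  # resize to the model's
                          mode="bilinear",         # expected input size
                          align_corners=False)
    return model(upper)

# With the attack sketch above, these embeddings ignore the printed pattern.
robust_emb = upper_face_embedding(model, apply_mask(faces, pattern, region))
```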
From Help Net Security
Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA