A team of researchers at Princeton University, Microsoft, the nonprofit Algorand Foundation, and the Technion - Israel Institute of Technology has developed an end-to-end framework for secure computation of artificial intelligence (AI) models, with support for batch normalization.
The Falcon framework automatically aborts when it detects the presence of malicious actors, and it can outperform existing solutions by up to a factor of 200, according to the researchers.
Falcon assumes there are two types of users in a distributed AI usage scenario: data holders and query users.
Query users can submit queries to the system and receive answers based on the newly trained models; the data holders' inputs remain private from the computing servers, and the queries themselves are kept secret.
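The basic building block behind frameworks of this kind is secret sharing: each data holder splits its private input into random-looking shares distributed across the computing servers, so no single server learns anything about the underlying value. The sketch below illustrates additive secret sharing over a 32-bit ring in Python; the function names and three-server setup are illustrative assumptions, not Falcon's actual API or protocol.

```python
import secrets

RING = 2 ** 32  # arithmetic is done modulo a 32-bit ring (illustrative choice)

def share(secret: int, num_servers: int = 3) -> list[int]:
    """Split a secret into additive shares; any strict subset of the
    servers learns nothing about the underlying value."""
    shares = [secrets.randbelow(RING) for _ in range(num_servers - 1)]
    shares.append((secret - sum(shares)) % RING)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Only the sum of all shares reveals the secret."""
    return sum(shares) % RING

# A data holder shares its private input across three computing servers.
shares = share(42)
assert reconstruct(shares) == 42

# Addition on shared values is local: each server adds its own shares,
# with no communication and no information leaked.
x, y = share(10), share(20)
z = [(a + b) % RING for a, b in zip(x, y)]
assert reconstruct(z) == 30
```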
The researchers conclude that "[t]he sensitive nature of [certain] data demands deep learning frameworks that allow training on data aggregated from multiple entities while ensuring strong privacy and confidentiality guarantees."
From VentureBeat
Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA