A new privacy metric developed by Massachusetts Institute of Technology (MIT) researchers makes it possible to add a small amount of noise to machine learning models to protect sensitive training data while maintaining the models' accuracy.
A framework accompanying the Probably Approximately Correct (PAC) Privacy metric automatically identifies the minimal amount of noise to add, without needing to know the model's inner workings.
PAC Privacy considers how difficult it would be for an adversary to reconstruct the sensitive data after noise has been added, and it determines the optimal amount of noise based on the entropy of the original data from the adversary's viewpoint.
The framework runs the user's machine learning training algorithm many times on different subsamples of the data, then compares the variance across all of the outputs to calculate how much noise must be added.
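As a rough illustration of that subsample-and-measure loop, the Python sketch below trains repeatedly on random subsamples, estimates a per-coordinate noise scale from the variance of the resulting outputs, and adds Gaussian noise of that scale. The names (train_fn, estimate_noise_scale, privatize), the subsampling scheme, and the choice to scale noise directly to the observed standard deviation are illustrative assumptions, not the researchers' actual implementation.

```python
import numpy as np

def estimate_noise_scale(train_fn, data, n_trials=100, subsample_frac=0.5, rng=None):
    """Hypothetical sketch: estimate per-coordinate noise from output variance.

    train_fn(data_subset) must return a 1-D parameter vector (the model output).
    """
    rng = np.random.default_rng(rng)
    n = len(data)
    outputs = []
    for _ in range(n_trials):
        # Train on a fresh random subsample of the data.
        idx = rng.choice(n, size=int(subsample_frac * n), replace=False)
        outputs.append(train_fn([data[i] for i in idx]))
    outputs = np.stack(outputs)
    # Variability across runs indicates how much the output reveals about
    # individual samples; scale the noise to that variability (an assumption
    # made here for illustration).
    return outputs.std(axis=0)

def privatize(params, noise_scale, rng=None):
    rng = np.random.default_rng(rng)
    # Add Gaussian noise proportional to the estimated per-coordinate scale.
    return params + rng.normal(scale=noise_scale, size=params.shape)

if __name__ == "__main__":
    # Toy usage: the "model" is just the per-column mean of the data.
    data = list(np.random.default_rng(0).normal(size=(1000, 3)))
    scale = estimate_noise_scale(lambda d: np.mean(d, axis=0), data, n_trials=50)
    private_params = privatize(np.mean(data, axis=0), scale)
```

Because the loop only observes the training algorithm's outputs, the procedure treats the model as a black box, which is why no knowledge of its inner workings is required.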
From MIT News
Abstracts Copyright © 2023 SmithBucklin, Washington, D.C., USA