
Communications of the ACM

ACM TechNews

How to Stop Artificial Intelligence Being Biased


Bias in artificial intelligence can be hard to root out.


Credit: Getty Images

Niki Kilbertus and colleagues at the Max Planck Institute for Intelligent Systems in Germany have developed a new method to avoid embedding bias into machine-learning algorithms.

Their technique incorporates sensitive data into the training process while involving an independent regulator and using cryptographic techniques to keep that data encrypted.

When training the artificial intelligence (AI), an organization can use as much non-sensitive data as it needs, but both the organization and the regulator receive the sensitive data only in encrypted form. This is still sufficient for the regulator to check whether the AI is making biased decisions, including decisions shaped by sensitive attributes inferred from the non-sensitive data.
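The abstract does not spell out the cryptography involved. As a simplified, purely illustrative sketch (not the researchers' actual protocol), additive secret sharing shows how a regulator could compute a group-level fairness statistic, such as a demographic-parity gap, without either party ever seeing any individual's sensitive attribute in the clear. All names and the data here are hypothetical:

```python
import secrets

P = 2**61 - 1  # large prime modulus for additive secret sharing

def share(bit):
    """Split one sensitive bit into two additive shares mod P."""
    r = secrets.randbelow(P)
    return r, (bit - r) % P

# Toy data: sensitive attribute (kept hidden) and model decisions (public to the auditor)
sensitive = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g. protected-group membership
decisions = [1, 1, 0, 1, 0, 1, 1, 0]   # model's accept/reject outputs

# A data source splits each attribute; organization and regulator each hold one share
org_shares, reg_shares = zip(*(share(a) for a in sensitive))

# Each party computes only partial aggregates over the public decisions
org_pos = sum(s * y for s, y in zip(org_shares, decisions)) % P
reg_pos = sum(s * y for s, y in zip(reg_shares, decisions)) % P
org_tot = sum(org_shares) % P
reg_tot = sum(reg_shares) % P

# Combining the partials reveals only group-level counts, not individual attributes
group_pos = (org_pos + reg_pos) % P    # protected members receiving a positive decision
group_size = (org_tot + reg_tot) % P   # protected-group size
other_pos = sum(decisions) - group_pos
other_size = len(decisions) - group_size

rate_gap = group_pos / group_size - other_pos / other_size
print(f"demographic-parity gap: {rate_gap:+.3f}")
```

Because the shares are uniformly random mod P, each party's view of any individual attribute is statistically meaningless on its own; only the summed aggregates carry information, which is what lets the auditing idea coexist with confidentiality.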

Once satisfied, the regulator can issue a fairness certificate to the organization. Because the check operates on encrypted data, the regulator needs no knowledge of the AI's inner workings, so trade secrets remain confidential.

From New Scientist

 

Abstracts Copyright © 2018 Information Inc., Bethesda, Maryland, USA


 
