
Communications of the ACM

ACM TechNews

Google's ML-Fairness-Gym Lets Researchers Study the Long-Term Effects of AI's Decisions


[Image: the statue of Blind Justice. Credit: Analytics India Magazine]

Researchers at Google have developed a set of components for evaluating algorithmic fairness in simulated social environments.

ML-fairness-gym, released as open source on GitHub, simulates decision-making with OpenAI's Gym framework so that researchers can study the long-term effects of automated systems' decisions.

Artificial intelligence (AI)-controlled agents interact with digital environments, choosing actions that affect the environment's state. The environment then reveals an observation that the agent uses to inform its next move. In this way, the environment models the system and dynamics of a problem, and the observations serve as data.

Actions informed by an AI system's output can, in turn, influence the system's future input.
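The interaction described above follows the standard Gym reset/step loop. The sketch below is a minimal, self-contained illustration of that loop and of how an agent's decisions feed back into the state it will observe later; the LoanEnv and ThresholdAgent classes are hypothetical examples, not part of the ML-fairness-gym API.

```python
# A minimal sketch of a Gym-style interaction loop. LoanEnv and ThresholdAgent
# are hypothetical illustrations; they only mirror the reset()/step()
# convention that OpenAI Gym environments follow.
import random


class LoanEnv:
    """Toy environment: the agent decides whether to approve loan applicants.

    Approvals change the applicant pool over time, so the agent's actions
    feed back into its future observations.
    """

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.credit_mean = 0.5  # hidden state of the population

    def reset(self):
        self.credit_mean = 0.5
        return self._observe()

    def _observe(self):
        # The observation is a noisy credit score drawn from the population.
        return max(0.0, min(1.0, self.rng.gauss(self.credit_mean, 0.1)))

    def step(self, action):
        # action == 1 approves the loan, 0 rejects it.
        repaid = self.rng.random() < self.credit_mean
        reward = (1.0 if repaid else -1.0) if action == 1 else 0.0
        # Long-term effect: repaid loans raise the population's average
        # credit; defaults lower it, altering what the agent sees next.
        if action == 1:
            self.credit_mean += 0.01 if repaid else -0.02
        return self._observe(), reward, False, {}


class ThresholdAgent:
    """Approves whenever the observed score clears a fixed threshold."""

    def __init__(self, threshold=0.55):
        self.threshold = threshold

    def act(self, observation):
        return 1 if observation >= self.threshold else 0


env, agent = LoanEnv(), ThresholdAgent()
obs = env.reset()
total = 0.0
for _ in range(1000):
    action = agent.act(obs)                      # decision from current observation
    obs, reward, done, info = env.step(action)   # action alters the future state
    total += reward
print(f"cumulative reward after 1000 steps: {total:.1f}")
```

Running the loop for many steps makes the feedback effect visible: the same fixed threshold can yield different outcomes over time because the decisions themselves shift the population being observed.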

From VentureBeat

Abstracts Copyright © 2020 SmithBucklin, Washington, DC, USA
