As machine learning has made its way into more and more areas of our lives, concerns about algorithmic bias have escalated. Machine learning models, which today facilitate decisions about everything from hiring and lending to medical diagnosis and criminal sentencing, may appear to be data-driven and impartial, at least to naïve users. But the typically opaque models are only as good as the data they are trained on, and only as ethical as the value judgments embedded in their algorithms.
The burgeoning field of algorithmic fairness, part of the much broader area of responsible computing, aims to remedy the situation. For several years now, computer scientists have been tackling the issue alongside philosophers, legal scholars, and experts in other fields. As Stanford University computer science professor Omer Reingold likes to put it, "We are part of the problem, and we should be part of the solution."