
Communications of the ACM

ACM News

Legislating Against AI Bias


Considering the fairness of artificial intelligence-based systems.

Earlier this year, the New York City Council mandated the establishment of a task force to look at the social impact of Artificial Intelligence-based decision systems on the public.

Credit: Shutterstock

Algorithmic bias has been known to distort decision making across all walks of life, but this could begin to change as a task force set up by the New York City Council investigates bias in systems operated by city agencies. The task force's remit is to produce recommendations on how information about city agencies' automated decision systems could be shared with the public, and on how those agencies could address instances in which people are harmed by automated decisions.

The NYC task force is not alone at the start of the long journey to address algorithmic bias and work out how to avoid its real and potential harms. Canada, France, Germany, and the U.K. are among the jurisdictions that have declared an interest in ethical artificial intelligence (AI), and all are keen to set standards that could be adopted on a regional and international scale.

The New York City Council in January adopted as a City Charter rule "A Local Law in Relation to Automated Decision Systems used by Agencies," which mandated the establishment of a task force to look at the social impact of such systems on the public. Members were named to the Automated Decision Systems Task Force in May, and an introductory meeting followed.

Vincent Southerland, executive director at the Center on Race, Inequality, and the Law at the New York University School of Law, and a member of the task force, said, "If agencies were held to account, technology could help solve society's more pernicious problems, particularly problems of race."

Predictive policing and decisions on bail, sentencing, and parole are just some of the processes the criminal justice system has automated, but there are plenty more elsewhere. Meredith Whittaker, also a member of the task force and co-founder of the AI Now Institute, an interdisciplinary research center at New York University dedicated to understanding the social implications of AI, points to the health, education, and employment sectors where, for example, AI systems may match children to schools, or applicants to jobs.

One example of bias in the home healthcare sector, she notes, occurred in Arkansas, where an algorithm made drastic changes to home health services based on incorrect determinations of who needed how much care, a situation that could become a matter of life and death.

Whittaker says, "I was alarmed that AI systems developed in industry relied on sources of data that are not reliable or robust, and were implemented in the social domain." That drove Whittaker, who also is founder of Google's Open Research Group, to co-found AI Now with Microsoft Research principal researcher Kate Crawford. With racial bias already baked into many agency policies, Whittaker says, AI only amplifies the problem: "Industry is racing to develop and market AI systems, but there isn't a parallel track assessing the systems."

AI Now is researching where AI works, where it falls short, and where it causes actual harm. It is also working on an algorithmic impact assessment framework, which Whittaker suggests could be a first step toward demonstrating transparency, oversight, and accountability for the algorithms used in public systems.

She explains, "Many algorithms are integrated into the back end of systems. We have little visibility into these systems, how they are used, the data they rely on, their logic and processes, and their predictive capacity. AI Now wants to enumerate existing and potential systems to find evidence of bias, inaccuracy or just plain wackiness." The organization recently asked government agencies to make Algorithmic Impact Assessments of their AI systems, using a framework "designed to support affected communities and stakeholders as they seek to assess the claims made about these systems, and to determine where—or if—their use is acceptable."

While campaigners lobby for change, technology vendors are berated for selling 'black box' algorithmic systems that are neither accountable nor transparent. If the recommendations of the NYC task force are implemented, a more ethical approach may be required. IBM, an AI veteran and leader of a partnership across industry, academia, and communities that aims to produce guidelines on the ethical use of AI, is already on the case. Francesca Rossi, head of global ethics at the company, says, "IBM believes all algorithms should be explainable and if they are not, they should not be on the market."

Considering the consequences of automated decisions on the lives of citizens, Rossi applauds the NYC task force and suggests every government should have a similar organization analyzing issues of bias and accountability. Looking forward, "I see a future where AI systems will need some sort of audit and certification before they are deployed at scale in the community."

The problem of unjust decisions is not only about biased algorithms, but also the data they are fed. Whittaker describes data as "the $100-billion question." She explains, "Data always reflects the views of the person creating it. If data is created in the context of racial police policies, the question becomes: can the data be cleansed? I would be dubious."

Southerland suggests random data could improve outcomes in the criminal justice system, but acknowledges there would still be some cases in which police incorrectly contact individuals.

In the community, however, the concern is not about data, but about algorithms as systems of power and control, and where they intersect with society and the economy on the ground. Southerland says most people are shocked when they learn automated decisions are being made about important aspects of their lives. "Everyone wants to be treated as an individual in the context of life. The fact that people are judged based on datasets of what other individuals did at some point is troubling. There is a notion that people are being profiled."

In the U.K., the House of Lords Select Committee on AI in April published a report titled AI in the UK: ready, willing and able?, which discusses ethics and AI and makes recommendations, including a cross-sector AI Code that the report proposes could be adopted nationally and internationally. The committee suggests five principles for such a code:

  • AI should be developed for the common good and benefit of humanity.
  • AI should operate on principles of intelligibility and fairness.
  • AI should not be used to diminish the data rights or privacy of individuals, families, or communities.
  • All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside AI.
  • The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

The committee's chairman, Lord Timothy Clement-Jones, said, "We want to avoid an AI winter. Public trust is crucial and we must not repeat prejudices of the past by using old data and bias. We need to create an ethical framework and robust audit methods to determine if there is bias in systems and see how algorithms came to particular decisions. In future, explainability should be built into algorithmic systems, and organizations that deploy algorithms should be alert to avoiding bias."
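To give a concrete sense of what one such audit check might involve, the minimal sketch below computes a widely used fairness statistic, the disparate-impact ratio, which compares the rate of favorable decisions received by a protected group with the rate received by a reference group. This is an illustration only; the data and field names are hypothetical, and it is not the method proposed by the committee or the NYC task force.

    # Minimal sketch of one common bias-audit statistic: the disparate-impact ratio.
    # The records and field names ("group", "decision") are hypothetical examples.

    def favorable_rate(records, group):
        """Fraction of a group's records that received a favorable decision."""
        in_group = [r for r in records if r["group"] == group]
        if not in_group:
            return 0.0
        return sum(1 for r in in_group if r["decision"] == "favorable") / len(in_group)

    def disparate_impact_ratio(records, protected_group, reference_group):
        """Ratio of favorable-decision rates; values well below 1.0 flag potential bias.
        A common rule of thumb treats ratios under 0.8 as cause for further review."""
        ref_rate = favorable_rate(records, reference_group)
        if ref_rate == 0.0:
            return float("nan")
        return favorable_rate(records, protected_group) / ref_rate

    # Made-up decision records: group A receives favorable outcomes half as often as group B.
    records = [
        {"group": "A", "decision": "favorable"},
        {"group": "A", "decision": "unfavorable"},
        {"group": "B", "decision": "favorable"},
        {"group": "B", "decision": "favorable"},
    ]
    print(disparate_impact_ratio(records, "A", "B"))  # prints 0.5

A real audit would need far more than a single ratio, including access to the underlying data, logic, and processes that, as the article notes, are largely invisible in today's systems.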

The U.K. government is expected to respond to the Select Committee report within the next few months. Clement-Jones expects a positive response, which he hopes will be followed by the formation of policy.

Cultural change and algorithmic transparency should help individuals understand how decisions are made about them and should make systems accountable, but herein lies a danger of putting the burden of proof on citizens, who have few resources compared to the government. Conversely, putting the burden of proof on government creates concerns about rubber-stamping, leading Southerland to suggest a third-party organization may be required to voice the concerns of all involved.

Sarah Underwood is a technology writer based in Teddington, U.K.


 
