
Communications of the ACM

News

Algorithmic Poverty


Illustration: user, UX, and AI icons. Credit: Shutterstock

"Life isn't fair" is perhaps one of the most frequently repeated philosophical statements passed down from generation to generation. In a world increasingly dominated by data, however, groups of people that have already been dealt an unfair hand may see themselves further disadvantaged through the use of algorithms to determine whether or not they qualify for employment, housing, or credit, among other basic needs for survival. In the past few years, more attention has been paid to algorithmic bias, but there is still debate about both what can be done to address the issue, as well as what should be done.

The use of an algorithm is not itself at issue; an algorithm is essentially a set of instructions for solving a problem or completing a task. Yet the lack of transparency surrounding the data and how it is weighted and used for decision making is a key concern, particularly when the algorithm's use may impact people in significant ways, often with no explanation as to why they have been deemed unqualified or unsuitable for a product, service, or opportunity.
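To make the transparency concern concrete, here is a deliberately simplified sketch of an opaque scoring decision. Everything in it is hypothetical: the feature names, weights, and cutoff are invented for illustration and are not drawn from any system discussed in this article.

# Hypothetical illustration only: features, weights, and cutoff are invented.
WEIGHTS = {
    "income": 0.4,             # how much each input counts toward the score
    "years_at_address": 0.2,
    "zip_code_score": 0.4,     # proxy features like this can encode historical bias
}
CUTOFF = 0.6

def decide(applicant: dict) -> str:
    """Return 'approve' or 'deny' from a weighted score, with no explanation."""
    score = sum(weight * applicant[name] for name, weight in WEIGHTS.items())
    return "approve" if score >= CUTOFF else "deny"

# The applicant sees only the outcome, never the weights or the score.
print(decide({"income": 0.5, "years_at_address": 0.3, "zip_code_score": 0.2}))  # prints "deny"

The point of the sketch is not the arithmetic but the asymmetry: the institution holds the weights and the data, while the applicant receives only a yes or a no.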


Comments


Joseph Bedard

This is a very good article. Thank you for writing it. I would like to add a few things.

The draft Guidance for Regulation of Artificial Intelligence Applications would require federal agencies to consider "issues of fairness and non-discrimination with respect to outcomes and decisions produced by the AI application at issue, as well as whether the AI application at issue may reduce levels of unlawful, unfair, or otherwise unintended discrimination as compared to existing processes." In my opinion, such regulation is unnecessary when companies already have an incentive to avoid litigation under the Equal Credit Opportunity Act and the Fair Housing Act, which prohibit discrimination based on race, religion, nationality, etc. Companies can't simply blame an algorithm to prove themselves innocent. I'm not aware of any case where a company has used an algorithm as a get-out-of-jail-free card. If there have been such cases, or if there is a compelling argument (supported by peer-reviewed empirical data) that those laws are not effective, then we could have a conversation about what additional laws are needed.

The article says, "In the absence of laws or standards, companies may need to take the lead in assessing and modifying their algorithms ..." For the sake of clarification, there is not an absence of laws or legal standards, as stated above.

I wholeheartedly agree that we should search for options to improve the lives of marginalized people in a way that does not unfairly degrade the lives of others. Innovative companies working on DeFi (decentralized finance) and crypto-currencies are doing exactly that. They are in the process of disrupting the existing financial services industry as you read this.


Keith Kirkpatrick

Thank you for reading, and thank you for mentioning the draft Guidance for Regulation of Artificial Intelligence Applications. When it comes to fairness and discrimination, it's often a combination of solutions (regulation, pressure from marginalized groups, and market forces) that is most impactful in effecting change.


Joseph Bedard

I agree that pressure from and representation of marginalized groups is an important market force. Many companies have adapted to such opportunities and provided service to those marginalized communities. However, the fact that regulation has often effected positive change does not mean that regulation will always effect positive change in all situations.

There are undoubtedly situations where regulation is necessary. Clean air and water are good examples: they are limited, shared resources susceptible to an economic tragedy of the commons. However, there are also situations where regulatory commissions are detrimental.

For example, the Interstate Commerce Commission (ICC) was originally formed to regulate railroads. After the public lost interest, the railroad industry gradually lobbied for favorable regulations and favorable appointments to the commission. When the trucking industry began to disrupt the railroad industry by providing lower-cost shipping, the ICC gained broader authority under the Motor Carrier Act of 1935. The ICC, corrupted in favor of the railroads, interfered in the development of the trucking industry, prevented end consumers from benefiting from reduced shipping costs, and wasted taxpayer money. (This example is detailed in Basic Economics by Thomas Sowell, Fifth Edition, pp. 158-159. I recommend that anyone advocating government regulation read it.)

The moral of the story is that we should be careful what we wish for. I could see a similar situation develop where a commission is originally established to regulate AI algorithms, but then becomes corrupt and inhibits development of competing blockchain companies.

So, the question is whether regulation is necessary in the specific case of AI algorithms in specific industries for specific purposes. Aren't these situations in which advocates for marginalized groups can raise awareness so that entrepreneurs can pursue unmet market opportunities or existing companies can revise algorithms? Even this article admits that Amazon was able to identify and retire a problematic algorithm without the involvement of a government regulatory commission. Or, does anyone know of any examples where regulation has been successful in micro-managing how companies operate?


Keith Kirkpatrick

You raise some very good points about the unintended consequences of regulation. Some observers believe that when the largest players in a market support regulation, it provides them with a significant advantage over smaller competitors (look at Walmart supporting minimum wage increases, which can actually hurt small businesses that cannot compete).

I think the value of and approach to regulation will remain an open question for years, given the complexities and competing factions likely to be involved.


