
Communications of the ACM

ACM Opinion

Removing the Risk of AI Bias in the Public Sector



The public sector is facing a data explosion. Digital citizen services, Internet of Things (IoT) devices and enterprise applications are collecting huge amounts of data, at a faster rate than ever before. While the public sector has previously relied on human bureaucracies to collect, manage and process the information required to serve citizens, this is no longer an option. The sheer quantity of data in play today makes unassisted human analysis unviable. Instead, the public sector is turning to technology.

Public sector bodies must meet demands for both information transparency and improved services, delivered at a lower cost to fit within budget constraints. While this creates certain challenges, advances in technology have made artificial intelligence (AI) an innovative way to address some of these issues. AI systems can process larger volumes of more complex information far faster than human analysts. They provide the scale and precision to augment human decision making, and they enable organisations to derive actionable insights from their data to improve citizen services.

However, certain ethical questions must be considered as we come to rely ever more heavily on machine- and AI-enabled decision making. This is particularly vital for government departments and public bodies looking to automate functions, given their impact on both citizens themselves and the wider economy. So what practical steps can be taken to drive ethical, unbiased AI use in the public sector?

Protect datasets against bias

Concerns about this growing reliance on AI are already surfacing. UK police officers have warned that biased AI tools may amplify prejudices and potentially lead to discrimination in police work. A recent report commissioned by the UK government's Centre for Data Ethics and Innovation brought a number of these potential issues to light. One of the key concerns amongst UK police officers is that using existing police records to train machine-learning tools may result in AI systems skewed by the arresting officers' own prejudices. The potential ramifications of this are unsettling. For example, certain members of society may be targeted with 'stop and search' police tactics more often.

This potential for AI bias is a valid concern. AI systems are built on data, meaning they will only be as objective and unbiased as the data we put into them. If human bias is introduced into the datasets, that bias will be reproduced in the outcomes of any system built on them.

The best way to prevent bias in AI systems is to implement ethical code at the data collection phase. This must start with a sample of data large enough to yield trustworthy insights and reduce subjectivity. A robust system able to collect and process the richest and most complex sets of information, including both structured and unstructured data, is therefore required to produce the most accurate insights. Data collection principles should also be examined by teams that include members from a variety of backgrounds, views and characteristics.
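To make this concrete, the following is a minimal sketch in Python of the kind of representativeness check a team might run at the data collection stage; the column names, the threshold and the pandas-based approach are illustrative assumptions rather than a prescribed method.

import pandas as pd

# Hypothetical minimum sample size per group; in practice the threshold
# would be set by the commissioning team, not hard-coded.
MIN_RECORDS_PER_GROUP = 500

def check_representation(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Count how many records each group contributes and flag groups
    whose sample is too small to yield trustworthy insights."""
    counts = df[group_col].value_counts().rename("records").to_frame()
    counts["share"] = counts["records"] / len(df)
    counts["sufficient"] = counts["records"] >= MIN_RECORDS_PER_GROUP
    return counts

# Example with a hypothetical citizen-services dataset.
records = pd.DataFrame({
    "age_band": ["18-30", "31-50", "51+", "18-30", "31-50"] * 400,
    "outcome":  [1, 0, 0, 1, 1] * 400,
})
print(check_representation(records, "age_band"))

A report of this kind makes under-represented groups visible before any model is trained, which is when gaps in the data are cheapest to fix.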

Yet even this careful, preventive approach cannot fully protect data against bias at all times. Results must therefore be monitored for signs of prejudice, and any notable correlation between outcomes and attributes such as race, sexuality, gender, religion or age should be investigated. If a bias is detected, organisations can implement mitigation strategies such as adjusting sample distributions.
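As an illustration of what such monitoring and mitigation might look like, here is a brief Python sketch that compares positive-outcome rates across groups and, if the gap exceeds a chosen tolerance, rebalances the training sample by resampling each group-outcome cell to the same size; the field names, the tolerance and the resampling strategy are assumptions for the example, not the only way to adjust a sample distribution.

import pandas as pd

# Hypothetical tolerance for the gap in positive-outcome rates; a real
# threshold would be a policy decision, not a constant in code.
MAX_RATE_GAP = 0.05

def outcome_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Positive-outcome rate per group (a simple demographic-parity check)."""
    return df.groupby(group_col)[outcome_col].mean()

def rebalance(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Resample every (group, outcome) cell to the same size so the
    training data no longer encodes a link between group and outcome."""
    target = df.groupby([group_col, outcome_col]).size().max()
    parts = [
        cell.sample(n=target, replace=True, random_state=0)
        for _, cell in df.groupby([group_col, outcome_col])
    ]
    return pd.concat(parts, ignore_index=True)

# Example with hypothetical historical decision records.
decisions = pd.DataFrame({
    "group":    ["A"] * 800 + ["B"] * 200,
    "approved": [1] * 600 + [0] * 200 + [1] * 80 + [0] * 120,
})
rates = outcome_rates(decisions, "group", "approved")
print(rates)  # group A favoured: 0.75 vs 0.40
if rates.max() - rates.min() > MAX_RATE_GAP:
    balanced = rebalance(decisions, "group", "approved")
    print(outcome_rates(balanced, "group", "approved"))  # gap removed in the sample

Resampling in this way only adjusts the training distribution; the resulting system's outcomes would still need to be monitored once it is deployed.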

Employ diverse teams for ethical AI

The UK recently became the first country to pilot diversity regulations for staff working on AI programs destined for government use. These guidelines state that teams commissioning the technology from private companies should "include people from different genders, ethnicities, socioeconomic backgrounds, disabilities and sexualities". In my view, this is a welcome step towards ensuring the ethical implementation and use of AI technologies. To take in the fullest possible spectrum of perspectives, organisations should also consider involving an HR or ethics specialist to work with data scientists and make sure that AI recommendations align with the organisation's cultural values.

In the next few years, we will see AI technology transform the public sector as more menial tasks are digitalised through AI and process automation. Yet this shouldn't be cause for fear. It will enable greater efficiency while taking some of the day-to-day strain off employees. Concerns around AI bias can be addressed if organisations implement the right principles and processes. Ultimately, AI systems are only as good as the data put into them, so ethical code must be put in place from the very start. By beginning with a clear goal that aligns with an organisation's values and routinely monitoring the outcomes of its AI systems, the public sector will be able to reap the rewards of AI and automation without putting citizens at risk of AI bias.

Zachary Jarvinen is head of product marketing, AI, and analytics at OpenText.


 
