
Communications of the ACM

ACM News

Canada’s New Federal Directive Makes Ethical AI a National Issue


A digitized maple leaf.

Canada is leading the world in artificial intelligence, thanks largely to huge government investments.

Credit: Techvibes

At the ideal intersection of technology and civil service, every government process would be automated, streamlining benefits, outcomes, and applications for every citizen in a digitally enabled country.

That approach demands a significant layer of protocol to ensure citizens feel empowered in decision-making processes and confident in how their government meets their needs digitally. Right now, Canada is leading the world in artificial intelligence (AI), thanks largely to huge government investments like the Pan-Canadian Artificial Intelligence Strategy. The field is pervasive; there is hardly an industry it has not disrupted, from mining to legal aid. Government is no different. In fact, government may be one of the most obvious places where automated decision processes can save time and money.

The dilemma that arises when a government adopts AI is an amplification of the problems any organization faces in embracing the burgeoning technology: how do you ensure an AI platform or service fairly and adequately serves the needs of its clients? A company like Facebook uses AI for a number of purposes, such as ad targeting and facial recognition in photos. Its algorithms may produce creepily accurate ads in the newsfeed, but the ethics of its machine learning systems really only affect a person's privacy, or lack thereof in recent years.

A government, on the other hand, must take a vast array of considerations into account as it adopts new technologies like AI. Governments deal with privacy, of course, but they also deal with health care, immigration, criminal activity, and more. So the question revolves less around "Which kind of AI solution should we use?" and more around "Which ones shouldn't we use?"

The AI platforms a government cannot touch are those that offer little to no transparency and are riddled with bias and uncertainty. If a government decision is rendered through an automated process, a citizen has a right to understand how that decision came to be. There can be no protection of IP and no closely guarded source code. For example, if an applicant for a criminal pardon is denied that pardon by an AI system trained on historical data, that applicant deserves to understand exactly why they were turned down.

The Canadian government's solution to this issue is the Directive on Automated Decision-Making, released earlier this week. Alluded to in late 2018 by then-Minister of Digital Government Scott Brison, it is a manual describing how the government will use AI to guide decisions within several departments. At the heart of the directive is the Algorithmic Impact Assessment (AIA), a tool that determines exactly what kind of human intervention, peer review, monitoring, and contingency planning an AI tool built to serve citizens will require.
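To make the mechanism concrete, the sketch below models in Python how an AIA-style questionnaire result might translate into an impact level with escalating oversight measures attached, in the spirit of the directive. The score thresholds, level cutoffs, and requirement lists are illustrative assumptions for the sake of the example, not the actual AIA scoring rules.

```python
# Hypothetical sketch of an AIA-style mapping from a questionnaire score
# to an impact level and the oversight measures tied to that level.
# All thresholds and requirement names below are illustrative assumptions,
# not the real Algorithmic Impact Assessment rules.

from dataclasses import dataclass


@dataclass
class Assessment:
    level: int                # impact level, 1 (low) to 4 (very high)
    requirements: list[str]   # oversight measures tied to that level


def assess(score: float) -> Assessment:
    """Map a normalized questionnaire score (0.0 to 1.0) to an impact level."""
    if score < 0.25:
        return Assessment(1, ["plain-language notice of automated decision"])
    if score < 0.5:
        return Assessment(2, ["notice", "documented monitoring"])
    if score < 0.75:
        return Assessment(3, ["notice", "monitoring", "peer review",
                              "human intervention on contested decisions"])
    return Assessment(4, ["notice", "monitoring", "external peer review",
                          "human approval before any final decision",
                          "contingency plan if the system fails"])


if __name__ == "__main__":
    result = assess(0.8)
    print(f"Impact level {result.level}: {', '.join(result.requirements)}")
```

The design point the directive captures is that oversight scales with stakes: a low-impact tool may only need to notify citizens, while a high-impact one requires human sign-off before any decision takes effect.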

 

From Techvibes
View Full Article

 


 
