ACM News

It's Too Easy to Hide Bias in Deep-Learning Systems


[Image: Looking for explanations from algorithms. Researchers have developed algorithms for understanding decision-making automatons, forming the new subfield of explainable AI. Credit: Eric Frommelt]

If you're on Facebook, click on "Why am I seeing this ad?" The answer will look something like "[Advertiser] wants to reach people who may be similar to their customers" or "[Advertiser] is trying to reach people ages 18 and older" or "[Advertiser] is trying to reach people whose primary location is the United States." Oh, you'll also see "There could also be more factors not listed here." Such explanations started appearing on Facebook in response to complaints about the platform's ad-placing artificial intelligence (AI) system. For many people, it was their first encounter with the growing trend of explainable AI, or XAI. 

But something about those explanations didn't sit right with Oana Goga, a researcher at the Grenoble Informatics Laboratory, in France. So she and her colleagues coded up AdAnalyst, a browser extension that automatically collects Facebook's ad explanations. Goga's team also became advertisers themselves. That allowed them to target ads to the volunteers they had running AdAnalyst. The result: "The explanations were often incomplete and sometimes misleading," says Alan Mislove, one of Goga's collaborators at Northeastern University, in Boston.

When advertisers create a Facebook ad, they target the people they want to view it by selecting from an expansive list of interests. "You can select people who are interested in football, and they live in Côte d'Azur, and they were at this college, and they also like drinking," Goga says. But the explanations Facebook provides typically mention only one interest, and the most general one at that. Mislove assumes that's because Facebook doesn't want to appear creepy; the company declined to comment for this article, so it's hard to be sure.
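To see why a one-interest explanation can mislead, consider a minimal sketch of an explanation engine that reports only the broadest targeting attribute. Everything below (the targeting criteria, the made-up audience sizes, and the explain_ad function) is a hypothetical illustration of the scenario Goga describes, not Facebook's actual code or data.

    # Hypothetical sketch: an advertiser targets a conjunction of narrow
    # interests, but the generated "Why am I seeing this ad?" text names
    # only the single most general one. All attributes and numbers are
    # invented for illustration.

    # Targeting attributes paired with made-up audience sizes; a larger
    # audience serves as a proxy for a more general attribute.
    targeting = {
        "interested in football": 400_000_000,
        "lives in Côte d'Azur": 2_000_000,
        "attended a particular college": 50_000,
        "interested in alcoholic beverages": 300_000_000,
    }

    def explain_ad(criteria):
        """Return an explanation that names only the broadest attribute."""
        broadest = max(criteria, key=criteria.get)
        return (f"[Advertiser] wants to reach people matching: {broadest}. "
                "There could also be more factors not listed here.")

    print(explain_ad(targeting))
    # Prints only the "interested in football" attribute, omitting the
    # narrower criteria (location, college, drinking) that actually
    # selected the user.

Even this toy version shows the pattern Goga's team observed: the explanation is technically true, but it omits the attributes that would reveal how narrowly the ad was aimed.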

Google and Twitter ads include similar explanations. With this gesture toward transparency, all three platforms are probably hoping to allay users' suspicions about their mysterious advertising algorithms while keeping any unsettling practices obscured. Or maybe they genuinely want to give users a modicum of control over the ads they see; the explanation pop-ups offer a chance for users to alter their list of interests. In any case, these features are probably the most widely deployed example of algorithms being used to explain other algorithms. Here, what's being revealed is why the algorithm chose a particular ad to show you.

From IEEE Spectrum
View Full Article
