Leading researchers at institutions such as OpenAI, Google, the U.K.'s Alan Turing Institute, and the University of Cambridge have proposed a system of financial rewards for users who spot bias in algorithms.
The concept was inspired by the bug bounty programs created to encourage software developers to look for flaws. Potential users could include artificial intelligence (AI) researchers with direct access to algorithms, the public, and journalists who encounter apparent bias in everyday systems.
Cambridge's Haydn Belfield said incentivizing users to rigorously check AI systems could help identify problems earlier in development, while OpenAI's Miles Brundage suggested monetary rewards would encourage developers to spot issues not covered in public documentation.
The Alan Turing Institute's Adrian Weller said financial compensation could encourage greater transparency on algorithmic bias, but cautioned that full transparency could reveal how to exploit such systems.
From Financial Times