Feathers were ruffled by a policy shift at this year's IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) that, for the first time, "strongly encouraged" researchers to include a discussion of the potential negative societal impacts of their research in their submission forms.
Researchers cherish their academic freedom and are "super aware" of the potential impact of their research, said one conference attendee, who argued that asking them to predict future applications restricts that independence. "They are not good at telling you what the applications of their research are," he said. "It's not their job."
Technologists are incentivized to build the highest-performing systems and get them to market quickly, said Navrina Singh, CEO of Credo AI. "Anytime we would talk about compliance and governance, the technologists were like, 'Oh, this is not my problem. That's not my space. That's not my incentive structure.'"
Practitioners say the ethics disconnect persists as young computer vision scientists make their way into the ranks of corporate AI.
From Protocol