
Communications of the ACM

ACM Opinion

AI Researchers Building Surveillance Tech and Deep Fakes Resist Ethical Concerns


A teacher in a lab shows students information about robotics on a laptop.

Some say that the ethics disconnect continues past the research phase as young computer scientists make their way into the ranks of corporate AI.

Credit: Hispanolistic/E+/Getty Images

Computer vision forms the foundation for AI-based technology products, but it also underpins tech with immense potential for personal harm and societal damage: from discriminatory facial recognition-fueled surveillance and disinformation-spreading deep fakes to controversial tech used to detect people's emotional states.

While these potential negative impacts are getting more attention, the computer vision community has, over the last several years, been reluctant to recognize the connection between the research advancements and cool math problem-solving achievements celebrated at one of its most prestigious annual conferences, and the possible uses for that tech once it is baked into apps and software products.

From Protocol
View Full Article

