
Communications of the ACM

ACM Opinion

AI Isn't a Solution to All Our Problems



Artificial intelligence is here to stay, but as with any helpful new tool, there are notable flaws and consequences to adopting it blindly.


From the esoteric worlds of predictive health care and cybersecurity to Google's email completion and translation apps, the impact of AI is increasingly felt in our everyday lived experience. The way it has crept into our lives in such diverse ways, and its proficiency at low-level knowledge tasks, show that AI is here to stay. But like any helpful new tool, there are notable flaws and consequences to adopting it blindly.

AI is a tool, not a cure-all for modern problems.

AI IS EVERYWHERE

AI tools aim to increase efficiency and effectiveness for the organizations that implement them. As I type this in Google Docs, the text-recognition software suggests action items to me. This software is built on Google's machine learning package TensorFlow, the same software that powers Google Translate, AirBnB's house tagging, brain analysis for MRIs, education platforms, and more. AI is also used in legal cases, where it helps legal advocates take on more clients by reducing the time they must spend on initial interviews. It may not be long before a patient is given a preliminary diagnosis by a computer before seeing a doctor. While AI has crept in to benefit most aspects of our lives, how do we know that it is built responsibly?

Every AI incorporates the values of the people who built it. The large amounts of data used to create these tools can come from surprising sources. Artificial intelligence "farms" around the world employ people to perform repetitive classification tasks such as image recognition, creating the categorized data necessary to build an AI. Beyond AI farms, online crowdsourcing projects are able to create robust tools because thousands of people come together to curate data. However, people bring biases and subjectivity that can influence an AI, intentionally or not. In 2016, Microsoft launched an AI chatbot, Tay, which evolved by interacting with Twitter users. The following 24 hours were a disaster, and a lesson in how quickly an AI can go from being excited to chat with humans for the first time to voicing support for Hitler.

Relying on data sets curated by humans incorporates the values and judgments of the companies producing the AI, the people implementing it, and its users. AIs are not created in a vacuum; they reflect the people who build and use them.

 

From Scientific American

 


 
