The use of artificial intelligence to create art raises questions about how credit and responsibility should be allocated. In a recent study, MIT Media Lab researcher and Ph.D. student Ziv Epstein and MIT Sloan School of Management Prof. David Rand found that the answers depend on the extent to which people view AI as human: the more people humanize AI, the greater the responsibility they allocate to the technology.
"The language we use to talk about AI can impact the way people think not just about AI itself, but all of the stakeholders involved. In the art world, AI has the potential to become a major force in artistic endeavors, and understanding how people perceive its role can affect the future of art," says Rand.
Epstein notes, "The way we allocate responsibility is complicated when AI is involved. AI is simply a tool created and used by humans, but when we describe it with human characteristics, people tend to view it very differently. It can be seen more as an agent with independent thought and the ability to create."
As an example, the researchers point to a portrait generated by a machine learning algorithm that sold at a Christie's art auction in 2018. A press release about the portrait stated that "an artificial intelligence managed to create art." Other articles about the portrait emphasized the autonomy and agency of the AI. The initial estimate for the piece was $10,000; it sold for $432,500.
Many individuals, including machine learning researchers, contributed to the portrait. However, only the art collective that selected, printed, marketed, and sold the image received the $432,500.
In their new study, "Who Gets Credit for AI-Generated Art?," published in iScience, Epstein, Rand, and their co-authors focus on two main research questions: 1) How do people think credit and responsibility should be allocated to the various actors involved in producing AI art? and 2) How do these intuitions vary with people's perceptions of the humanness of the AI system?
They found variation in the extent to which people attribute human qualities to AI. While some see it as a human-like technology that can make decisions, others view it as only a tool. The more someone sees AI as human-like, the more credit and responsibility they assign to the AI.
The study also showed that perceptions of AI can be manipulated with language. The way AI is described has "material consequences," especially for how people attribute responsibility and credit for the work, says Epstein. If it is described as simply a tool used by humans, then certain humans — like the person who runs the code — get more responsibility and credit. However, if it is described with human attributes — like having the ability to create — then both the AI and the technologist who wrote the code get more credit and responsibility.
"Anthropomorphizing AI is risky because it can reduce and shift the accountability among humans. It may prevent the public from holding individuals responsible for their actions," Epstein says.
Rand adds, "It's important to understand how the narrative impacts the way people perceive the role of AI in art — and anything else where AI is involved. That narrative can affect incentives to create, collaborations among different parties, compensation, and liability."
Additional authors of the paper include Sydney Levine of MIT's Department of Brain and Cognitive Sciences and the Department of Psychology at Harvard University, and Iyad Rahwan of the Center for Humans and Machines at the Max Planck Institute for Human Development.