In a video of a performance titled Embodied Machine, Spanish choreographer and dancer Muriel Romero twists her body around eight beams of light. She is wearing a latex bodysuit encrusted with fungal-like forms, and the music accompanying her movements is ominous and shrill. Romero appears alone on stage, but this dance is a duet—one participant is human and the other, a machine.
Embodied Machine was produced by Instituto Stocos, a Madrid-based research group that investigates intersections of movement, sound, interactive visuals, biology, and artificial intelligence (AI). The music, developed by Pablo Palacio, is produced in real time; it draws on algorithmically created sounds and motion data generated by Romero's bodily gestures. The result is the sonification of her movements.
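Instituto Stocos has not published the details of its mapping here, but the core idea of sonification can be sketched in a few lines: derive a feature such as speed from streaming motion data and map it to synthesis parameters. The Python below is a hypothetical illustration, not Palacio's pipeline; the mapping constants and frame rate are arbitrary.

```python
# Illustrative sonification sketch (not Instituto Stocos' system): map a
# stream of (x, y, z) motion samples to pitch and loudness.
import math

def sonify(frames, sample_hz=30):
    """Yield (frequency_hz, amplitude) pairs, one per motion frame."""
    prev = None
    for x, y, z in frames:
        # Speed of the tracked point between frames, in units/second.
        speed = 0.0 if prev is None else math.dist((x, y, z), prev) * sample_hz
        prev = (x, y, z)
        freq = 220.0 * 2 ** min(speed / 2.0, 3.0)   # faster motion -> higher pitch
        amp = min(speed / 4.0, 1.0)                 # faster motion -> louder
        yield freq, amp

# Example: a hand sweeping steadily upward produces a rising, swelling tone.
frames = [(0.0, 0.1 * i, 0.0) for i in range(10)]
for f, a in sonify(frames):
    print(f"{f:7.1f} Hz  amp {a:.2f}")
```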
Romero's bodysuit contains motion sensors that interact with the eight light beams, causing them to extend, mirror, or oppose her movements; they become her non-human partner. The robotic lights, part of a system of AI-based interactive visuals, were designed by Daniel Bisig, an expert in computer sound and immersive arts at the Zurich University of the Arts. AI adds a layer of "abstraction" that Bisig can manipulate, he explained. Yet the system is unpredictable, so he does not know exactly what it will do when "translated into a thing that dancers see and can interact with."
Embodied Machine builds on over a decade of Bisig's research into AI and creativity. Previous work includes the development of swarm simulations for choreography, the generation of imagined limbs using mass-spring systems and artificial neural networks, and "puppeteering AI," the use of machine learning (ML) to control an artificial dancer.
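Bisig's systems are considerably more elaborate, but the flavor of a choreographic swarm simulation can be conveyed with a generic boids-style sketch, in which agents steer by cohesion, separation, and alignment rules; the agents' positions could then drive lights or projected visuals. All parameters below are illustrative, not drawn from his work.

```python
# Generic boids-style swarm sketch: each agent steers toward the flock's
# center (cohesion), away from crowding (separation), and along its
# neighbors' headings (alignment).
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, (20, 2))        # 20 agents in a 2-D space
vel = rng.uniform(-0.01, 0.01, (20, 2))

def step(pos, vel):
    cohesion = (pos.mean(axis=0) - pos) * 0.01
    diff = pos[:, None, :] - pos[None, :, :]      # (i, j) -> pos_i - pos_j
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                # ignore self-distance
    close = dist < 0.1
    # Push away from any neighbor closer than 0.1 units.
    separation = (diff / dist[..., None] * close[..., None]).sum(axis=1) * 0.005
    alignment = (vel.mean(axis=0) - vel) * 0.05
    vel = vel + cohesion + separation + alignment
    return pos + vel, vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("swarm centroid after 100 steps:", pos.mean(axis=0))
```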
While Bisig and Romero are comfortable working with AI, anxieties did arise when prompt-based tools such as ChatGPT, DALL-E, and Stable Diffusion appeared to pose a threat to creative practitioners, especially musicians, writers, and visual artists. However, the conversation has already shifted toward a more balanced tone, one that recognizes that AI is more likely to "augment, rather than replace, human creators." Creative communities are now actively discussing how to co-create with AI.
The authors of a workshop discussion paper presented at the 2023 ACM CHI Conference on Human Factors in Computing Systems wrote, "It is not clear what kinds of collaboration patterns will emerge when creative humans and creative technologies work together. …Together we will develop theories and practices in this intriguing new domain."
Human-AI co-creativity has applications throughout the arts. In the visual sphere, Memo Akten, an artist and researcher in the Department of Visual Arts at the University of California, San Diego, employs deep learning, speculative simulations, and "data dramatization" to investigate AI, ecology, ethics, and spirituality. Chinese-Canadian artist Sougwen Chung has built AI-driven robots that use recurrent neural networks to learn her drawing style, allowing Chung to study synergies and tensions between hand-made and machine-made marks.
In music, drummer Jojo Mayer mixes analog and AI percussion in live collaborations with a machine. At the Centre for Creative Performance & Classical Improvisation at the Guildhall School of Music and Drama in London, U.K., David Dolan and Oded Ben-Tal, a composer and researcher at Kingston University, London, performed an improvised musical dialogue with a semi-autonomous AI system.
Charles Patrick Martin, a computing and cybernetics expert at the Australian National University, is a co-author of the ACM GenAICHI 2023 discussion paper. Martin's work focuses on the bodily gestures produced during music-making, rather than the notes themselves. "I was interested in using machine learning to predict or extend these kind of gestures, or create them on our behalf, and then embed some kind of prediction system within a musical instrument itself," he explained.
Martin's lab produces instruments that aim to "predict their human player's intentions and sense the current artistic context." Approaches include mapping inputs from sensors and combining them with a synthesis system to generate sound, and using ML to predict musical intention.
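As a rough sketch of what such a predictive instrument loop might look like (an illustration, not code from Martin's lab), a model can observe a stream of sensor values, extrapolate the player's next gesture, and keep driving the synthesis mapping when real input stops. Here the "model" is just linear extrapolation, and the gesture-to-parameter mapping is invented for the example.

```python
# Hypothetical predictive-instrument loop: when the player stops moving,
# the predictor's guesses drive the synth parameter instead.
from collections import deque

class GesturePredictor:
    """Predict the next 1-D sensor value by linear extrapolation."""
    def __init__(self, history=4):
        self.buf = deque(maxlen=history)

    def observe(self, value):
        self.buf.append(value)

    def predict(self):
        if len(self.buf) < 2:
            return self.buf[-1] if self.buf else 0.0
        return self.buf[-1] + (self.buf[-1] - self.buf[-2])  # continue the trend

def synth_param(value):
    # Map a 0..1 gesture value to, say, a filter cutoff in Hz.
    return 200.0 + 4800.0 * max(0.0, min(1.0, value))

predictor = GesturePredictor()
stream = [0.1, 0.2, 0.3, 0.4, None, None]   # None = player stops moving
for reading in stream:
    value = reading if reading is not None else predictor.predict()
    predictor.observe(value)   # feed real or predicted values back in
    tag = "(predicted)" if reading is None else ""
    print(f"cutoff: {synth_param(value):6.1f} Hz {tag}")
```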
Martin wants agency over what he creates; he builds his own datasets and tools. Large generative AI models owned by companies are "very, very secret," he said. Further, pre-trained models are "boring" to many artists. However, creative curiosity can arise when a tool makes mistakes or produces weird results: "It's happily generating some scene, but then there are six fingers on the hand or something; that's when we find it uncanny or interesting to look at," Martin explained.
Agency is also critical to Gabriel Vigliensoni, a music artist and researcher at Concordia University, Montreal. Vigliensoni composes and performs music using machine learning and small datasets of rhythms and sounds he builds himself. Unlike models built on big data, small datasets support a more "creative conversation" between human and machine, said Vigliensoni. "In the types of models and architecture I'm using, the latency is smaller; there's a more direct link between the gesture and the output of the model."
Yet Vigliensoni is clear that his work is not a collaboration with AI. "That would be putting too much agency on the tool. I'm the entity with the agency."
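Vigliensoni's point about latency follows from model size: in a live gesture-to-sound loop, every inference costs compute, and that cost is felt as delay. The toy benchmark below (not his setup; the layer sizes are arbitrary) times a forward pass through a small and a large multilayer perceptron to make the contrast concrete.

```python
# Back-of-envelope illustration of why smaller models respond faster:
# per-step inference cost grows with parameter count.
import time
import numpy as np

def mlp_forward(x, layers):
    for w in layers:
        x = np.tanh(x @ w)
    return x

rng = np.random.default_rng(0)
small = [rng.standard_normal((64, 64)) * 0.1 for _ in range(3)]
large = [rng.standard_normal((2048, 2048)) * 0.1 for _ in range(3)]

for name, layers in [("small (64-unit)", small), ("large (2048-unit)", large)]:
    x = rng.standard_normal((1, layers[0].shape[0]))
    t0 = time.perf_counter()
    for _ in range(100):
        mlp_forward(x, layers)
    ms = (time.perf_counter() - t0) / 100 * 1000
    print(f"{name:16s} ~{ms:.2f} ms per inference step")
```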
Agency and control are mentioned frequently by these creative practitioners; the ability to code, curate data, and build systems and tools is vital. Bisig has a background in robotics and Martin in mathematics, which gives them an interdisciplinary mindset and the skills to make, rather than simply use, AI technologies. Yet not all creatives are computer scientists, nor do they wish to be.
The future of human/machine co-creation may hinge on the extent to which AI skills can be acquired in art schools and music conservatories, where coding and databases could become as commonplace as paintbrushes and cellos.
Karen Emslie is a location-independent freelance journalist and essayist.