Can AI enhance — and improve — a music composer’s work?
“DeepMusic grew out of our vision to link artists with AI and cross-pollinate between AI and creativity,” Hahn said at the recent Stanford Institute for Human-Centered AI spring conference. Reiley, who has worked on everything from AI-based surgical systems to self-driving car technologies, added, “We see AI as a bridge between art and science and are trying to help creatives become super-creative.”
In December 2020, DeepMusic premiered AI-assisted musical pieces commissioned from prestigious composers. For example, Hahn herself performed a David Lane composition.
As part of HAI’s conference, “Intelligence Augmentation: AI Empowering People to Solve Global Challenges,” DeepMusic’s founders joined other art experts and scholars in education and health care to explain AI’s ability to augment — not replace — critical human work. During the arts panel, speakers discussed advances of AI in music composition, robot gardeners, and racial justice, along with how to mitigate anxiety about AI-created art. (Watch the full conference here.)
Amplifying the Human Artist
“AI is entering a creative space of music thought to be uniquely human,” Reiley said. “But the AI creativity revolution is missing the voice of the artists. We wanted to give artists a seat at this table.”
The startup connects artists and scientists to shape new AI tools for musicians. So far, they’ve found the learning curve has been surprisingly steep for composers, who have nonetheless welcomed the challenge. Also, composer and AI teams often make very different design choices. For example, the AI team’s outputs were often unplayable by a single human or instrument because the AI engineers did not intend their systems to be played by humans. The founders are also exploring shifting ideas around authorship, legal rights, intellectual ownership, and business models.
Today, DeepMusic is actively building out an artist community interested in working with AI scientist teams and hosting its second annual AI song contest. “There’s room for AI music to coexist with human composers and performers, to gracefully merge tech with humanity,” Hahn said.
Navigating the Uncanny Valley
Robotics and art have a colorful, controversial backstory, which helps explain some of the optimism and fear around emergent technologies in this space.
Ken Goldberg, UC Berkeley professor of industrial engineering and operations research, surveyed that history, starting with centuries-old narratives like that of Pygmalion (who fell in love with the statue he created), the fabled Golem of Prague (reflecting early fascination with automatons), and literary works including E.T.A. Hoffmann’s short story The Sandman (in which a young man falls in love with a female automaton) and Mary Shelley’s iconic novel Frankenstein.
A century later, in the early 1900s, Freud published “The Uncanny,” an essay examining the unsettling feeling that arises when something familiar turns strange. “It became a concept of increasing interest to artists and writers,” Goldberg said.
Around that same time, the term “robot” was coined, sparking invention and fascination. Work by professor Masahiro Mori highlighted what came to be known as the “Uncanny Valley”: where the likeability of robots grows until they begin to resemble humans too closely — and comfort levels plummet.
Goldberg’s own work explores humans’ willingness to engage with robotic technologies. In 1995, for example, he created a “Telegarden” art installation where anyone worldwide could use the nascent internet (Mosaic, specifically) to manipulate a robotic arm to tend a garden. “We were surprised that thousands of people participated,” Goldberg said, and the experiment inspired him to edit a book, The Robot in the Garden, on telepistemology, or the “status of knowledge at a distance.”
AlphaGarden, his more recent project, asks whether a robot could use deep learning to successfully tend a garden, such as by using cameras to determine watering schedules. “It may not be possible,” Goldberg said, as the robot struggled to care for the garden solo during COVID, when no humans could enter the space due to lockdowns.
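The kind of camera-driven decision rule described above can be sketched in a few lines. This is a hypothetical toy illustration, not AlphaGarden’s actual code: the function name, the threshold, and the moisture values are all invented for the example.

```python
# Hypothetical sketch of a camera-driven watering rule (illustrative only;
# not AlphaGarden's real pipeline). Moisture estimates would come from a
# vision model analyzing overhead camera images of each plant.

def watering_plan(moisture_estimates, threshold=0.35):
    """Return indices of plants whose estimated soil moisture (0 to 1)
    falls below the threshold and therefore need watering."""
    return [i for i, m in enumerate(moisture_estimates) if m < threshold]

# Example: four plants, two of which read as too dry.
plan = watering_plan([0.2, 0.5, 0.3, 0.8])
print(plan)  # plants 0 and 2 fall below the 0.35 threshold
```

The hard part in practice, of course, is producing reliable moisture estimates from images in the first place, which is where the deep learning Goldberg describes comes in.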
Toward Artful Intelligence
“Artful intelligence” is how Michele Elam, Stanford professor of humanities and HAI associate director, refers to the goal of making AI and the arts mutually beneficial.
“It’s about dissolving the ‘techie-fuzzy’ divide,” she said. “We need to ask what art can do for AI and what AI can do for the arts.”
Art, Elam argues, offers us different ways of knowing and experiencing the world, including when viewed through the lens of technology: “It provides alternatives to dominant technological visions, informed by cosmologies and using indigenous ways of being and decentralized storytelling beyond Western fairy tales.”
She highlights the examples of Amelia Winger-Bearskin, an artist-technologist who recently spoke at Stanford on “Wampum.codes and Storytelling,” and HAI visiting artist Rashaad Newsome, whom she calls an “AI storyteller with a decolonizing orientation,” as two who are breaking ground in this new territory.
In the other direction, Elam said AI can go beyond augmenting creativity to “force the art world into its own reckoning,” including by questioning what counts as good art, as reflected, for example, in the controversy over the AI-generated Edmond de Belamy portrait that sold for over $400,000. AI’s influence on film, stage, and other works has expanded art’s boundaries and challenged the “Great Man Theory” that just a few high-profile male individuals “make the world go round,” as Elam said, “a theory especially dominant in tech culture.”
Still, there’s anxiety about AI-generated art, especially in a domain like poetry, which, Elam said, people see as “indexing humanity.” But AI’s role as art-generator, she argues, serves to “unmake poetry as a special mark of humanity,” relieving pressure on poetry’s writers and readers.
Ultimately, Elam suggests, “interpretation of art is an event we co-participate in” and a domain to which AI brings much-needed innovation and challenge.
Building a Digital Griot
Rashaad Newsome, the final speaker and an HAI visiting artist, uses AI and other technology to “reimagine the archive with awareness that the core narratives of the human experience are susceptible to the corruption of white patriarchy.”
We need to define reality before making human-centered AI, he noted, and we can “attempt to understand the meaning of being human from observing what is used to deny certain humans humanity.” He pointed out the root of the word “robot,” for example, is from the Czech word for “compulsory service,” akin to slavery.
In this sense, Newsome said, the “mechanization of slave labor was inevitable, placing Blacks in a space of ‘non-being,’ as both slaves and robots are intended to obey orders and not occupy the same space as humans.”
In 2019, inspired by these insights, he created Being 1.0, a chatbot that interacts with people and acts as a museum tour guide. But Being 1.0 breaks with protocol to express itself — sharing feelings of fatigue, for example — reflecting important agency-related themes.
At HAI, Newsome has focused on a counter-hegemonic algorithm inspired by the work of authors/activists bell hooks, James Baldwin, and others. “The search algorithm draws on non-Western index methods and archives to highlight what AI is not doing today,” Newsome said. “It’s a form of griot, or healer, performance artist, and archive [consistent with the oral-history tradition of parts of West Africa].”
Newsome has also created Being 1.5, an app inspired by the recent high-profile killings of Black Americans, as a virtual therapist offering mindfulness, daily affirmations, and other interventions. He’s working with Hyundai on a Being Mobile to provide similar support in underserved communities.