Exploring the Ethics of AI through Narrative

John Thickstun, an assistant professor at Cornell, and filmmaker Sophie Barthes collaborate on a screenplay.
A workshop at Stanford convened filmmakers and researchers to think about the implications of artificial intelligence.
INT. CONFERENCE ROOM – STANFORD UNIVERSITY
Men and women in small groups of two or three fill the room. Some huddle in quiet discussion, others type on computers, still others appear lost in thought. Projected on a screen in the background: STORIES FOR THE FUTURE – 2024.
This was the scene last summer at Stanford’s Institute for Human-Centered Artificial Intelligence (HAI). Filmmakers and AI researchers came together for two days to consider how stories shape perceptions — and even policies — about AI. The workshop aimed to foster collaborations between the two groups and to create new narratives about AI.
Now, the organizers, led by Stanford computer science master’s student Isabelle Levent and screenwriter and School of Humanities & Sciences lecturer Adam Tobin, have published a booklet that summarizes the workshop, collects scenes developed by its participants, and distills the main takeaways from the experience.
In this conversation, workshop participants Sophie Barthes, a Franco-American filmmaker whose third feature film, “The Pod Generation,” premiered opening night at the 2023 Sundance Film Festival, and John Thickstun, an assistant professor at Cornell who completed a postdoc at HAI’s Center for Research on Foundation Models, discuss how this workshop reshaped their understanding of AI and creativity.
At the conference, you two worked together on the beginning of a script, right?
John Thickstun: This was our goofy watermarking auditors idea. The prompt for us was to think about a paper that I had recently written, which might be interesting to craft a story around. I came up with this one on watermarking text — essentially inserting secret messages into generated text documents that let people go back forensically and tell whether the text was written by a person or by a model. We spent some amount of time getting our heads around the premise of what might be interesting about a watermark, and then we tried to figure out how to tell a story about that.
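To make the watermarking idea concrete, here is a minimal toy sketch of the kind of statistical check a "watermarking auditor" might run. This is an illustrative simplification, not the scheme from Thickstun's paper: the shared key, the green-list construction, and the detection baseline are all assumptions made for the example.

```python
import hashlib

# Toy "green list" watermark detector, loosely in the spirit of statistical
# text watermarking (NOT the algorithm from the paper discussed above).
# A keyed hash of each adjacent word pair pseudo-randomly marks about half
# of all continuations "green." A watermarking generator would quietly favor
# green words; an auditor who knows the key counts how many words landed green.

SECRET_KEY = "hypothetical-shared-key"  # assumed to be shared with the auditor

def is_green(prev_word: str, word: str) -> bool:
    """Pseudo-randomly assign `word` to the green set, seeded by `prev_word`."""
    digest = hashlib.sha256(f"{SECRET_KEY}|{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # roughly half of all pairs hash to green

def green_fraction(text: str) -> float:
    """Fraction of green words; unwatermarked text should sit near 0.5."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog"
    print(f"green fraction: {green_fraction(sample):.2f}")  # ~0.5 for plain text
```

In a real scheme, the generator would bias its sampling toward green words so that watermarked text scores well above the 0.5 baseline, letting an auditor flag it forensically while ordinary human writing stays near chance.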
Do you remember anything about that process of moving from this academic idea into the more creative space of telling a story?
Sophie Barthes: This is what I do all the time: I try to take ideas that belong to the world and put them into a narrative that would work for film. But I'm curious, John, how that was for you. You live in that world of ideas.
Thickstun: It was hard. It gives me a lot of empathy for screenwriters who get technology wrong on screen. I felt like it was important that we faithfully run with this idea, but it’s so easy to see how creativity and narrative and drama can take precedence over accuracy.
Barthes: You don't want to be didactic. You don't want to lecture. So it's hard to give information that is scientifically accurate and makes sense while also making it digestible and entertaining.
Part of the premise of the conference was that AI, in particular, can be a difficult technology to convey. Sophie, you've written and directed sci-fi movies; John, you work on AI. Is there something that's especially difficult about this subject?
Barthes: I think so, because it's new to us and it feels very abstract. So in movies we’re trying to incorporate artificial intelligence in a way that's not boring or didactic or too abstract, even though it’s not yet fully part of our daily life. I think the challenge for writers is trying to humanize these stories, to make them relevant to the human experience. How is it for you, John? Every time you see a movie about artificial intelligence, does it feel completely fabricated and not relevant to your experience?
Thickstun: We talk a lot in the community about the movie “Her,” which came out over a decade ago. We all feel in some ways that it was remarkably prescient. I don’t think it’s accidental that many of the metaphors and stories in that movie have driven and inspired some of the recent technological developments. But I think AI is hard because it's ephemeral. If you’re telling a story about a computer in the fifties, you can put the computer on screen; you can have HAL (the AI in the movie "2001: A Space Odyssey"). But intelligence is sort of hard to portray. It's not a visual construction.
It's interesting. You mentioned that “Her” actually drove development. I think of the way in which cutting-edge technology bleeds into narrative but don't think of it moving the other way.
Thickstun: It absolutely moves the other way.
Barthes: After my film “The Pod Generation” I had someone contact me from San Diego who was really interested in knowing how the artificial womb technology works. I said, "Well, it's just a shell. I mean, it's a movie prop, so none of it works." But he was super curious about the design because he thinks his company is going to get there.
Thickstun: And when scientists need to raise funds, it's actually much easier to do it if uncreative people with money can see a movie about a technology you’re describing. If I come to a VC and I pitch some wild idea, they don't get it. But if someone like Sophie figures out how to write a movie about that technology and then the VC watches the movie, well, they get it.

Workshop participants explored new narratives around artificial intelligence.
Was it challenging to work together, or was it productive in any unexpected ways?
Barthes: For me, it was too short. We had presentations in the morning and then something like 10 minutes to write a scene. It was just an exercise. But it planted the seeds of what could have been.
Thickstun: I'm pretty sure we had more than 10 minutes, but it felt like 10 minutes. Yeah, they sat us down and said, "OK, go be creative." But you asked about what came out of the conversation. One thing that came up, which I've been thinking about every week since the conference, is related to this point we were discussing earlier, about how these conversations go both ways: Technology informs narrative and art, but art informs and guides the development of technology. To some extent I came into this conference with a plea for better stories about technology, more positive and hopeful stories. And, I can't remember who said it to me, but someone said that if we simply write hopeful, optimistic stories about technology, it's hard not to feel like a shill for the big businesses building this technology. This just stopped me dead in my tracks.
Barthes: It’s the difference between advertising and filmmaking. But I think maybe more than hopeful stories, we need truthful stories. I think it’s the complexity that comes with truthfulness that is more important than hopefulness. We know that technology is made by humans, and so it has the complexity of humans.
Thickstun: And maybe a lot of the technology we're building is just fundamentally dystopian. I don't discount that.
Barthes: With the artificial womb, I was asking myself this question all the time. It’s not like the technology is good or bad. It’s an extension of who we are. So we are good and bad all the time. The technology is good and bad all the time.
Do these ethical questions arise for you, John, when you’re doing research?
Thickstun: Well, I first want to comment more broadly on the technology that I work on: generative AI. Ten years ago, when I started working on this, no one was thinking about ethical questions because nothing worked. I certainly wasn’t imagining the sort of human and ethical challenges that we see today: How do you think about the economic effects on people who make a living creating art? And how do you think about ownership rights and copyright when you're training these models on other people’s work and labor?
Generative modeling got good relatively quickly, and so we had to start taking these ethical and societal questions seriously. I think a lot of us have been caught flat-footed. I’m trying to be more thoughtful up front with the initial development of things that I work on, but it's hard. It’s hard to guess where the technology is going and what the implications will be.
Barthes: You can't foresee all the problems. When you’re experimenting, you have no idea where you’ll land, right?
Thickstun: Yeah. And I think that if you try to self-censor and say, "Well, I’m only going to work on these prosocial technologies" — I don’t have confidence that I can predict what will be prosocial. I didn’t get the complexity of generative AI 10 years ago, even five years ago. A good friend of mine from graduate school worked on computer vision research, and around 2018 he made the ethical decision that he just wasn't going to work on it anymore because of all of the applications to smart bombs and warfare and surveillance technology. That’s one position that you can take.
Barthes: You have the same responsibility in film. When you look at Marvel and the use of violence and guns — that all comes from Hollywood, where everyone is anti-gun. And yet all they do is make movies that are violent, where people shoot each other. You could instead make a movie like Kubrick’s “Full Metal Jacket” to show the horror of war. I think these ethical questions apply to everyone. I don't think filmmakers are less guilty than scientists. It’s about how each of us confronts the choices we make. That’s why you can’t just say, "Let’s tell good, hopeful stories." The moment we start with something there will be a million implications, a million choices.
Thickstun: I think that this premise about writing a good, hopeful story falls victim to the same problem of trying to tell purely scientifically accurate stories. At some point you simply follow the narrative where it goes. If it’s not hopeful and you try to stuff it into that box, well, now you’re not writing a good movie anymore. You’re writing propaganda.
What’s next for each of you, and are you carrying anything from this conference into that work?
Barthes: I'm working on several projects. One is a surreal film about Edward Hopper and his muse; another is an adaptation of a New Yorker story. And, thanks to this conference, I met the writer Alexander Weinstein, and we are thinking of developing a futuristic project together about AI — an idea that came to us while talking at the conference.
Thickstun: I do a lot of work on AI for music, and I think constantly about how we can steer the development of music technologies toward more prosocial applications. I'm in the early stages of organizing a workshop that brings together researchers and musicians to think about the future of music; in many ways, my vision for this workshop is inspired by my experience at Stories for the Future.