Spellburst: A Large Language Model–Powered Interactive Canvas for Generative Artists

Date: September 13, 2023
Topics: Arts, Humanities; Design, Human-Computer Interaction
Image credit: Piranka/iStock

This new creativity support tool helps artists who work in code explore ideas using natural language and iterate with precision.

Generative artists work in code. Using programming languages like Processing or AI text-to-image tools, they translate expressive semantics into lines of code that form swirling, colorful patterns or surrealistic landscapes. 

But coding art is a time-consuming, complicated process. While a pencil’s eraser might fix an errant line or a little yellow might brighten a painting’s dark skyline, improving generative art takes trial and error through numerous iterations with often frustratingly opaque interfaces.

After interviewing expert digital artists about these creative frustrations, scholars from Stanford and Replit have developed a tool called Spellburst, recently published on the preprint service arXiv, to improve the ideation and editing process.

“Translating an artist’s imagination into code takes a lot of time, and it’s very difficult,” says Hariharan Subramonyam, assistant professor at the Graduate School of Education and a faculty fellow at the Stanford Institute for Human-Centered AI. “A large language model can give you a good starting point. But when the artist wants to explore different textures, different colors or patterns, at that point they want finer control, which large language models can’t provide. Spellburst essentially helps artists seamlessly switch between the semantic space and the code.”

Built with the large language model GPT-4, Spellburst allows artists to input an initial prompt, say, “a stained glass image of a beautiful, bright bouquet of roses.” The model then generates the code to render that concept. But what if the flowers are too pink, or the stained glass doesn’t look quite right? Artists can then open a panel of dynamic sliders generated using the previous prompt to change any aspect of the image or can add modifying notes (“make the flowers a dark red”). These creators can merge different versions (“combine the color of the flowers in version 4 with the shape of the vase in version 9”). The tool also allows artists to transition from prompt-based exploration to program editing – they can click on the image to reveal the code, allowing for more granular fine-tuning. 

A look at Spellburst's user interface, from prompt to refinement to final code
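
One way to picture the workflow described above is as a version history that the interface maintains: each version stores the natural-language prompt and the generated sketch code, and new versions are produced by refining or merging earlier ones. The Python sketch below is only an illustration of that idea under stated assumptions; the llm() helper, the Version record, the p5.js target, and the prompt wording are hypothetical stand-ins, not Spellburst's actual implementation (the article only notes that the tool is built with GPT-4 and renders code-based sketches).

    import dataclasses

    # Hypothetical stand-in for a call to a large language model such as GPT-4.
    def llm(prompt: str) -> str:
        raise NotImplementedError("plug in an LLM client of your choice here")

    @dataclasses.dataclass
    class Version:
        prompt: str  # natural-language description that produced this version
        code: str    # generated sketch code (e.g., a p5.js / Processing-style sketch)

    versions: list[Version] = []

    def create(prompt: str) -> Version:
        """Initial prompt -> generated sketch code (a new starting version)."""
        code = llm(f"Write a p5.js sketch for: {prompt}. Return only the code.")
        v = Version(prompt, code)
        versions.append(v)
        return v

    def refine(base: Version, note: str) -> Version:
        """Semantic edit ('make the flowers a dark red') applied to an earlier version."""
        code = llm(f"Here is a sketch:\n{base.code}\nModify it so that: {note}. "
                   "Return only the code.")
        v = Version(f"{base.prompt}; {note}", code)
        versions.append(v)
        return v

    def merge(a: Version, b: Version, note: str) -> Version:
        """Combine aspects of two versions, e.g., color from one, shape from another."""
        code = llm(f"Sketch A:\n{a.code}\n\nSketch B:\n{b.code}\n\n"
                   f"Combine them so that: {note}. Return only the code.")
        v = Version(note, code)
        versions.append(v)
        return v

    # Example session mirroring the article's scenario:
    # v1 = create("a stained glass image of a beautiful, bright bouquet of roses")
    # v2 = refine(v1, "make the flowers a dark red")
    # v3 = merge(versions[3], versions[8],
    #            "combine the color of the flowers in version 4 "
    #            "with the shape of the vase in version 9")
    # print(v3.code)  # 'clicking on the image to reveal the code' = inspecting the stored source

In a sketch like this, the dynamic sliders the article describes could correspond to numeric parameters the model is asked to expose in the generated code, which the interface would then bind to on-screen controls.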

‘Larger Creative Leaps’

To better inform the design of Spellburst, the research team interviewed 10 expert creative coders on how they develop their concepts, their creative workflow, and their biggest challenges. Later, the team tested the tool with expert generative artists.

“The feedback was overall very positive,” Subramonyam says. “The large language model helps artists bridge from semantic space to code faster, but it also helps them explore many different variations and take larger creative leaps.”

The tool, of course, has its limitations. The research team saw errors and unexpected results with some prompts, particularly in version merges, and it was not always clear which prompts would lead to the desired results. Plus, the small sample of artists providing feedback certainly doesn’t represent the full generative artist community.

But the hope is that this tool will be useful for coder artists and maybe even a broader audience, Subramonyam says. 

“We want to release the tool as open-source later this year so that artists can start using it, but we also want to study how a tool like this can help novices learn how to make art with code.”

The paper's authors include Tyler Angert, product designer at Replit; Miroslav Ivan Suzara, Stanford Ph.D. student in education; Jenny Han, research engineer at the UCI School of Education; and Christopher Lawrence Pondoc, Stanford computer science graduate student.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

Author: Shana Lynch
