AI-powered EDGE Dance Animator Applies Generative AI to Choreography

Date
April 20, 2023
Topics
Design, Human-Computer Interaction
Machine Learning

AI analyzes the music’s rhythmic and emotional content and creates realistic dances that are also physically plausible — a real dancer could perform them.

Stanford University researchers have developed a generative AI model that can choreograph human dance animation to match any piece of music. It’s called Editable Dance GEneration (EDGE).
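In outline, a system like this maps audio to a sequence of body poses, one per audio frame. The sketch below is a toy stand-in, not the EDGE implementation: `music_features` and `generate_dance` are hypothetical names, and the "model" simply scales random poses by per-frame energy where a trained network would sample plausible motion conditioned on learned music embeddings.

```python
import numpy as np

# Toy sketch of a music-to-dance interface: audio in, one pose vector per
# audio frame out. The feature extractor and "model" are hypothetical
# stand-ins, not the actual EDGE implementation.

def music_features(audio, hop=512):
    """Crude rhythmic features: mean energy of each hop-sized frame."""
    n = len(audio) // hop
    frames = audio[: n * hop].reshape(n, hop)
    return (frames ** 2).mean(axis=1, keepdims=True)   # shape (n_frames, 1)

def generate_dance(features, n_joints=24, seed=0):
    """Dummy generator: random poses scaled by musical energy.

    A trained model would instead sample plausible motion conditioned
    on learned music embeddings.
    """
    rng = np.random.default_rng(seed)
    base = rng.standard_normal((features.shape[0], n_joints))
    return base * features                              # louder music, bigger moves

sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
audio = np.sin(2 * np.pi * 440 * t)                     # one second of a 440 Hz tone
feats = music_features(audio)
poses = generate_dance(feats)
assert poses.shape == (len(feats), 24)                  # one pose per audio frame
```

The real system operates on richer audio representations and a learned motion prior, but the interface shape — audio features in, a frame-aligned pose sequence out — is the part this sketch illustrates.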

“EDGE shows that AI-enabled characters can bring a level of musicality and artistry to dance animation that was not possible before,” says Karen Liu, a professor of computer science who led a team that included two student collaborators, Jonathan Tseng and Rodrigo Castellon, in her lab.

The researchers believe that the tool will help choreographers design sequences and communicate their ideas to live dancers by visualizing 3D dance sequences. Key to the program’s advanced capabilities is editability. Liu imagines that EDGE could be used to create computer-animated dance sequences by allowing animators to intuitively edit any parts of dance motion.

For example, an animator can specify the character’s leg movements, and EDGE will “auto-complete” the rest of the body’s motion from that positioning in a way that is realistic, seamless, and physically plausible: a human dancer could perform the completed moves. Above all, the moves remain consistent with the animator’s choice of music.
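This kind of editing can be sketched as constraint-preserving generation, in the spirit of diffusion inpainting: at each refinement step the model proposes full-body motion, then the animator-specified joints are re-imposed so the final sequence honors them exactly. Everything below is hypothetical — a toy "denoiser" that shrinks values toward zero stands in for a trained motion model — and is not the actual EDGE code.

```python
import numpy as np

# Toy sketch of constraint-based "auto-complete" editing. A pose sequence is
# a (frames, joints) array; `mask` marks the entries the animator has fixed.

def autocomplete(known, mask, denoise_step, steps=50, seed=0):
    """Fill unmasked entries while keeping masked entries pinned.

    known: array holding animator-specified values where mask is True.
    denoise_step: stand-in for one reverse-diffusion step of a trained model.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(known.shape)   # start from noise
    for _ in range(steps):
        x = denoise_step(x)                # model proposes full-body motion
        x[mask] = known[mask]              # re-impose the edited joints
    return x

# Dummy "denoiser" that just pulls values toward zero; a real model would
# predict plausible motion conditioned on the music.
smooth = lambda x: 0.8 * x

poses = np.zeros((4, 6))
poses[:, :2] = 1.0                         # animator fixes two leg joints
mask = np.zeros_like(poses, dtype=bool)
mask[:, :2] = True

out = autocomplete(poses, mask, smooth)
assert np.allclose(out[mask], 1.0)         # edited joints come back untouched
```

The design choice illustrated here is why the edits feel seamless: the constraints are enforced inside the generation loop, so the model reconciles the free joints with the pinned ones at every step rather than patching them in afterward.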

Like other generative models for images and text — ChatGPT and DALL-E, for instance — EDGE represents a new tool for choreographic idea generation and movement planning. The editability means that dance artists and choreographers can iteratively refine their sequences move by move, position by position, adding specific poses at precise moments. EDGE then incorporates the additional details into the sequence automatically. In the near future, EDGE will allow users to input their own music and even demonstrate the moves themselves in front of a camera.

“We think it’s a really fun and engaging way for everyone, not just dancers, to express themselves through movement and tap into their own creativity,” Liu says.

“With its ability to generate captivating dances in response to any music, we think EDGE represents a major milestone in the intersection of technology and movement,” adds Tseng. “It will unlock new possibilities for creative expression and physical engagement,” says Castellon.

The team has published a paper and will formally introduce EDGE at the Conference on Computer Vision and Pattern Recognition (CVPR) in Vancouver, British Columbia, in June.

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s)
Andrew Myers

Related News

AI Leaders Discuss How To Foster Responsible Innovation At TIME100 Roundtable In Davos
TIME
Jan 21, 2026
Media Mention

HAI Senior Fellow Yejin Choi discussed responsible AI model training at Davos, asking, “What if there could be an alternative form of intelligence that really learns … morals, human values from the get-go, as opposed to just training LLMs on the entirety of the internet, which actually includes the worst part of humanity, and then we then try to patch things up by doing ‘alignment’?” 


Stanford’s Yejin Choi & Axios’ Ina Fried
Axios
Jan 19, 2026
Media Mention

Axios chief technology correspondent Ina Fried speaks to HAI Senior Fellow Yejin Choi at Axios House in Davos during the World Economic Forum.


How AI Shook The World In 2025 And What Comes Next
CNN Business
Dec 30, 2025
Media Mention

HAI Co-Director James Landay and HAI Senior Fellow Erik Brynjolfsson discuss the impacts of AI in 2025 and the future of AI in 2026.
