Policy Implications of DeepSeek AI’s Talent Base | Stanford HAI

Policy Brief


Date: May 06, 2025
Topics: International Affairs, International Security, International Development; Foundation Models; Workforce, Labor

Abstract

This brief presents an analysis of Chinese AI startup DeepSeek’s talent base and calls for U.S. policymakers to reinvest in competing to attract and retain global AI talent.


Key Takeaways

  • Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent.

  • We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns.

  • Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the roughly one quarter who did gain experience in the United States, most returned to China to work on AI development there.

  • These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness.

Executive Summary

Chinese startup DeepSeek AI upended the conventional wisdom about AI innovation. When it released its R1 reasoning model and V3 general-purpose large language model (LLM) in January 2025, models that demonstrated unexpectedly strong reasoning capabilities, the company sent tremors through markets and challenged assumptions about American technological superiority.

Beyond debates about DeepSeek’s computation costs, the company’s breakthroughs speak to critical shifts in the ongoing global competition for AI talent. In our paper, “A Deep Peek into DeepSeek AI’s Talent and Implications for US Innovation,” we detail the educational backgrounds, career paths, and international mobility of more than 200 DeepSeek researchers. Nearly all of these researchers were educated or trained in China, more than half never left China for schooling or work, and of the roughly one quarter who did gain experience in the United States, most returned to China.

Policymakers should recognize these talent patterns as a serious challenge to U.S. technological leadership that export controls and computing investments alone cannot fully address. The success of DeepSeek should act as an early-warning signal that human capital—not just hardware or algorithms—plays a crucial role in geopolitics and that America’s talent advantage is diminishing.

Introduction

DeepSeek was founded in 2023 as an AI research company focused on developing “cost-efficient, high-performance language models.” Since then, the company has posted five detailed technical research papers to the arXiv preprint server (arxiv.org) between 2024 and 2025, with a total of 223 authors listed as contributors.

Relying on the OpenAlex research catalog, we pulled data on both the authors (publication records, citation metrics, and institutional affiliations dating back to 1989) and their institutions (geographic location, organization type, and research output metrics). We wrote custom Python scripts to parse the data and map each researcher’s complete institutional history, revealing previously undetected patterns of cross-border movement. By focusing on talent movements over time rather than on snapshots, we were able to assess how talent pipelines have evolved and to zero in on phenomena like “reverse brain drain,” a key mechanism for strategic knowledge transfer that is of great relevance to the United States.
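The trajectory classification described above can be sketched roughly as follows. This is an illustrative sketch only, not the authors’ actual pipeline: the `classify_mobility` helper, its category labels, and the sample histories are hypothetical, and real OpenAlex author records are far richer (institution IDs, country codes, and per-publication affiliations).

```python
# Illustrative sketch of classifying a researcher's cross-border trajectory
# from a chronological affiliation history. The records below are made-up
# examples, not actual DeepSeek author data.

def classify_mobility(affiliations):
    """Classify a researcher's trajectory from (year, country_code) pairs.

    Returns one of:
      - "domestic-only": never affiliated outside China
      - "returnee":      gained U.S. experience, then returned to China
                         (the "reverse brain drain" pattern)
      - "abroad":        most recent affiliation is outside China
    """
    # Sort by year so the last entry is the most recent affiliation.
    countries = [country for _, country in sorted(affiliations)]
    if all(c == "CN" for c in countries):
        return "domestic-only"
    if "US" in countries and countries[-1] == "CN":
        return "returnee"
    return "abroad"

# Hypothetical author histories for illustration:
returnee_history = [(2015, "CN"), (2019, "US"), (2023, "CN")]
domestic_history = [(2016, "CN"), (2020, "CN"), (2024, "CN")]

print(classify_mobility(returnee_history))  # returnee
print(classify_mobility(domestic_history))  # domestic-only
```

Aggregating these labels across all 223 authors would yield the kinds of shares the brief reports (more than half domestic-only, roughly a quarter with U.S. experience, most of whom returned).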

Authors
  • Amy Zegart
  • Emerson Johnston

Related Publications

Beyond DeepSeek: China's Diverse Open-Weight AI Ecosystem and Its Policy Implications
Caroline Meinhardt, Sabina Nong, Graham Webster, Tatsunori Hashimoto, Christopher Manning
Issue Brief, Deep Dive, Dec 16, 2025
Topics: Foundation Models; International Affairs, International Security, International Development

Almost one year after the “DeepSeek moment,” this brief analyzes China’s diverse open-model ecosystem and examines the policy implications of their widespread global diffusion.

Moving Beyond the Term "Global South" in AI Ethics and Policy
Evani Radiya-Dixit, Angèle Christin
Issue Brief, Quick Read, Nov 19, 2025
Topics: Ethics, Equity, Inclusion; International Affairs, International Security, International Development

This brief examines the limitations of the term "Global South" in AI ethics and policy, and highlights the importance of grounding such work in specific regions and power structures.

Yejin Choi’s Briefing to the United Nations Security Council
Yejin Choi
Testimony, Quick Read, Sep 24, 2025
Topics: International Affairs, International Security, International Development

In this address, presented to the United Nations Security Council meeting on "Maintenance of International Peace and Security," Yejin Choi calls on the global scientific and policy communities to expand the AI frontier for all by pursuing intelligence that is not only powerful, but also accessible, robust, and efficient. She stresses the need to rethink our dependence on massive-scale data and computing resources from the outset, and to design methods that do more with less, building AI that is smaller and serves all communities.

Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Policy Brief, Quick Read, Sep 24, 2025
Topics: Foundation Models; Privacy, Safety, Security

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.