Policy Brief

Policy Implications of DeepSeek AI’s Talent Base

Date: May 06, 2025
Topics: International Affairs, International Security, International Development; Foundation Models; Workforce, Labor
Abstract

This brief presents an analysis of Chinese AI startup DeepSeek’s talent base and calls for U.S. policymakers to reinvest in competing to attract and retain global AI talent.

Key Takeaways

  • Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than human talent.

  • We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns.

  • Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the roughly one-quarter who did gain some experience in the United States, most returned to China to work on AI development there.

  • These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness.

Executive Summary

Chinese startup DeepSeek AI upended the conventional wisdom about AI innovation. When it released its R1 language model and V3 general-purpose large language model (LLM) in January 2025, demonstrating unprecedented reasoning capabilities, the company sent tremors through markets and challenged assumptions about American technological superiority.

Beyond debates about DeepSeek’s computation costs, the company’s breakthroughs speak to critical shifts in the ongoing global competition for AI talent. In our paper, “A Deep Peek into DeepSeek AI’s Talent and Implications for US Innovation,” we detail the educational backgrounds, career paths, and international mobility of more than 200 DeepSeek researchers. Nearly all of these researchers were educated or trained in China, more than half never left China for schooling or work, and of the roughly one-quarter who did gain some experience in the United States, most returned to China.

Policymakers should recognize these talent patterns as a serious challenge to U.S. technological leadership that export controls and computing investments alone cannot fully address. The success of DeepSeek should act as an early-warning signal that human capital—not just hardware or algorithms—plays a crucial role in geopolitics and that America’s talent advantage is diminishing.

Introduction

DeepSeek was founded in 2023 as an AI research company focused on developing “cost-efficient, high-performance language models.” Since then, the company has released five detailed technical research papers on the arXiv.org preprint archive, posted between 2024 and 2025, with a total of 223 authors listed as contributors.

Relying on the OpenAlex research catalog, we pulled data on both the authors (publication records, citation metrics, and institutional affiliations dating back to 1989) and their institutions (geographic location, organization type, and research output metrics). We wrote custom Python scripts to parse the data and map each researcher’s complete institutional history, revealing previously undetected patterns of cross-border movement. Focusing on talent movements over time, rather than on snapshots, allowed us to assess how talent pipelines have evolved and to zero in on phenomena like “reverse brain drain,” a key mechanism for strategic knowledge transfer that is of great relevance to the United States.
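This kind of mapping can be approximated with the public OpenAlex REST API. The sketch below is a minimal illustration, not the authors’ actual scripts: the work ID is a placeholder, and the field names (authorships, affiliations, years) reflect the publicly documented OpenAlex schema.

```python
# Minimal, illustrative sketch: pull a paper's authors from OpenAlex and map
# each author's institutional history. Not the authors' actual pipeline; the
# work ID below is a placeholder, and field names follow the public OpenAlex schema.
import requests

OPENALEX = "https://api.openalex.org"

def authors_of_work(work_id: str) -> list[dict]:
    """Return the author records listed on one OpenAlex work."""
    work = requests.get(f"{OPENALEX}/works/{work_id}", timeout=30).json()
    return [a["author"] for a in work.get("authorships", [])]

def institutional_history(author_id: str) -> list[tuple]:
    """Map one researcher's affiliations to (institution, country, years)."""
    author = requests.get(f"{OPENALEX}/authors/{author_id}", timeout=30).json()
    return [
        (aff["institution"]["display_name"],
         aff["institution"].get("country_code"),
         aff.get("years", []))
        for aff in author.get("affiliations", [])
    ]

if __name__ == "__main__":
    # Placeholder ID: substitute the OpenAlex work IDs of DeepSeek's five papers.
    for author in authors_of_work("W0000000000"):
        short_id = author["id"].rsplit("/", 1)[-1]  # "https://openalex.org/A..." -> "A..."
        print(author["display_name"], institutional_history(short_id))
```

In a fuller pipeline, each DeepSeek paper’s OpenAlex work ID would be resolved first (for example, via a title search against the /works endpoint), and the per-author histories aggregated to flag moves between Chinese and U.S. institutions.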

Authors
  • Amy Zegart
  • Emerson Johnston

Related Publications

Mind the (Language) Gap: Mapping the Challenges of LLM Development in Low-Resource Language Contexts
Juan Pava, Haifa Badi Uz Zaman, Caroline Meinhardt, Toni Friedman, Sang T. Truong, Daniel Zhang, Elena Cryst, Vukosi Marivate, Sanmi Koyejo
White Paper (Deep Dive) | Apr 22, 2025
Topics: International Affairs, International Security, International Development; Natural Language Processing

This white paper maps the LLM development landscape for low-resource languages, highlighting challenges, trade-offs, and strategies to increase investment; prioritize cross-disciplinary, community-driven development; and ensure fair data ownership.

What Makes a Good AI Benchmark?
Anka Reuel, Amelia Hardy, Chandler Smith, Max Lamparth, Malcolm Hardy, Mykel Kochenderfer
Policy Brief | Dec 11, 2024
Topics: Foundation Models; Privacy, Safety, Security

This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Response to Request | Sep 09, 2024
Topics: Regulation, Policy, Governance; Foundation Models; Privacy, Safety, Security

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

How Persuasive is AI-Generated Propaganda?
Josh A. Goldstein, Jason Chao, Shelby Grossman, Alex Stamos, Michael Tomz
Policy Brief | Sep 03, 2024
Topics: Democracy; Foundation Models

This brief presents the findings of an experiment that measures how persuasive AI-generated propaganda is compared to foreign propaganda articles written by humans.