Policy Implications of DeepSeek AI’s Talent Base

This brief presents an analysis of Chinese AI startup DeepSeek’s talent base and calls for U.S. policymakers to reinvest in competing to attract and retain global AI talent.
Key Takeaways
Chinese startup DeepSeek’s highly capable R1 and V3 models challenged prevailing beliefs about the United States’ advantage in AI innovation, but public debate focused more on the company’s training data and computing power than on its human talent.
We analyzed data on the 223 authors listed on DeepSeek’s five foundational technical research papers, including information on their research output, citations, and institutional affiliations, to identify notable talent patterns.
Nearly all of DeepSeek’s researchers were educated or trained in China, and more than half never left China for schooling or work. Of the roughly one-quarter who did gain some experience in the United States, most returned to China to work on AI development there.
These findings challenge the core assumption that the United States holds a natural AI talent lead. Policymakers need to reinvest in competing to attract and retain the world’s best AI talent while bolstering STEM education to maintain competitiveness.
Executive Summary
Chinese startup DeepSeek AI upended the conventional wisdom about AI innovation. When it released its V3 general-purpose large language model (LLM) in late 2024 and its R1 reasoning model in January 2025, which demonstrated reasoning capabilities rivaling those of leading U.S. models, the company sent tremors through markets and challenged assumptions about American technological superiority.
Beyond debates about DeepSeek’s computation costs, the company’s breakthroughs speak to critical shifts in the ongoing global competition for AI talent. In our paper, “A Deep Peek into DeepSeek AI’s Talent and Implications for US Innovation,” we detail the educational backgrounds, career paths, and international mobility of more than 200 DeepSeek researchers. Nearly all of these researchers were educated or trained in China, more than half never left China for schooling or work, and of the roughly one-quarter who did gain some experience in the United States, most returned to China.
Policymakers should recognize these talent patterns as a serious challenge to U.S. technological leadership that export controls and computing investments alone cannot fully address. The success of DeepSeek should act as an early-warning signal that human capital—not just hardware or algorithms—plays a crucial role in geopolitics and that America’s talent advantage is diminishing.
Introduction
DeepSeek was founded in 2023 as an AI research company focused on developing “cost-efficient, high-performance language models.” Since then, the company has posted five detailed technical research papers to the arxiv.org preprint server between 2024 and 2025, with a total of 223 authors listed as contributors.
Relying on the OpenAlex research catalog, we pulled data on both the authors (publication records, citation metrics, and institutional affiliations dating back to 1989) and their institutions (geographical location, organization type, and research output metrics). We wrote custom Python scripts to parse the data and map each researcher’s complete institutional history, revealing previously undetected patterns of cross-border movement. Our focus on talent movements over time, rather than on snapshots, enabled us to assess how talent pipelines have evolved. It also allowed us to zero in on phenomena like “reverse brain drain,” a key mechanism for strategic knowledge transfer that is highly relevant to the United States.
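For readers who want to reproduce this kind of mapping, the sketch below shows one way to pull a researcher’s affiliation history from the public OpenAlex REST API (api.openalex.org). It is a minimal illustration, not the scripts used in the study: the example author name is a placeholder, and pagination, rate limiting, and author disambiguation are omitted for brevity.

```python
"""Minimal sketch (not the study's actual pipeline) of reconstructing an
author's institutional history from the public OpenAlex REST API."""
import requests

OPENALEX = "https://api.openalex.org"


def find_author_id(name: str) -> str:
    """Return the OpenAlex ID of the best-matching author for a name."""
    resp = requests.get(f"{OPENALEX}/authors", params={"search": name})
    resp.raise_for_status()
    results = resp.json()["results"]
    if not results:
        raise ValueError(f"No OpenAlex author found for {name!r}")
    return results[0]["id"]  # e.g. "https://openalex.org/A1234567890"


def institutional_history(author_id: str) -> list[tuple[int, str, str]]:
    """Collect (year, institution, country_code) tuples from the author's works."""
    params = {
        "filter": f"author.id:{author_id}",
        "sort": "publication_year",
        "per-page": 200,  # first page only; real use would paginate
    }
    resp = requests.get(f"{OPENALEX}/works", params=params)
    resp.raise_for_status()

    history = []
    for work in resp.json()["results"]:
        year = work.get("publication_year")
        for authorship in work.get("authorships", []):
            if authorship["author"]["id"] != author_id:
                continue  # skip co-authors of the same work
            for inst in authorship.get("institutions", []):
                history.append((year, inst["display_name"], inst.get("country_code")))
    return history


if __name__ == "__main__":
    author_id = find_author_id("Jane Doe")  # placeholder name, not a DeepSeek author
    for year, inst, country in institutional_history(author_id):
        print(year, inst, country)
```

Grouping the resulting (year, country_code) pairs by researcher and scanning for transitions such as CN to US and back to CN is, in essence, how “reverse brain drain” trajectories can be surfaced at scale.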