Peter Henderson
PhD Student in Computer Science; JD Candidate, Stanford Law School; AI Fellow, Open Philanthropy; Graduate Fellow, Regulation, Evaluation, and Governance Lab, Stanford University

Foundation models are often trained on large volumes of copyrighted material. In the United States, AI researchers have long relied on fair use doctrine to avoid copyright issues with training data. However, our U.S. case law analysis in this brief highlights that fair use is not guaranteed for foundation models and that the risk of copyright infringement is real, though the exact extent remains uncertain. We argue that the United States needs a two-pronged approach to addressing these copyright issues—a mix of legal and technical mitigations that will allow us to harness the positive impact of foundation models while reducing intellectual property harms to creators.
Courts will have to grapple with this new challenge, although scholars believe much of generative AI will be protected by the First Amendment.
Researchers show that ChatGPT can be jailbroken for as little as 20 cents, but they are working on making this more difficult with “self-destructing models.”
America is ready again to lead on AI—and it won’t just be American companies shaping the AI landscape if the White House has anything to say about it.
Meanwhile, the FTC tells AI companies to be careful with their hype, and class action lawsuits follow for unauthorized practice of law by AI.
New Stanford tracker analyzes the 150 requirements of the White House Executive Order on AI and offers new insights into government priorities.
The Texas social media law moves forward, TikTok agrees to audits, and an EU proposal causes controversy over open-source models.
The U.S. continues to ramp up export controls on chips (and maybe soon AI), the first lawsuit is filed for software piracy, and more.
The San Francisco Police Department requests to use robots with deadly force, new complaints filed alleging algorithmic discrimination, and more.
And last year’s trickle of AI-related court cases now looks more like a surge.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
This brief underscores the safety risks inherent in custom fine-tuning of large language models.