
Law, Policy, & AI Update: China Requires AI Watermarks, ChatGPT Won’t Make it to U.S. Courtrooms

February 6, 2023

And last year’s trickle of AI-related court cases now looks more like a surge.

In the last couple of months, artificial intelligence has faced the legal system on many fronts as it is deployed in products across the globe. The United States Supreme Court is grappling with whether algorithmic recommendations should receive Section 230 protections, courts in the United States and the United Kingdom are examining whether large-scale training of foundation models complies with copyright and contract law, and the list goes on. The legal system will have to resolve many AI-related issues rapidly over the next few years, and those decisions will shape the landscape of AI development.

Law

  • In January, China’s Cyberspace Administration enacted new regulations on deep-synthesis technology, covering deepfakes and other types of generative AI systems. The rules place significant restrictions on AI-generated media, including a requirement that such content carry identifiers, like watermarks. The move comes as several new tools (including ones from OpenAI and Stanford, among others) launched to identify AI-generated content, which could be used for cheating on exams, spreading disinformation, and more; a minimal sketch of one statistical detection approach appears after this list.

  • The Department of Consumer and Worker Protection (DCWP) in New York postponed enforcement of a law targeting AI bias until April 15, 2023. According to the department, the high volume of public comments prompted it to hold a second hearing to address concerns.

  • In Canada, a class action lawsuit is moving forward against Meta, alleging employment and housing discrimination through its ad-targeting practices. This comes after Meta and the U.S. Department of Justice agreed to a resolution modifying Meta’s ad-targeting algorithms to comply with antidiscrimination law.

  • Litigation has been filed against Stability AI and Midjourney over their image-generation foundation models. In the United States, a class action lawsuit against both companies alleges copyright infringement, among other claims. Getty Images has also initiated litigation against Stability AI, alleging copyright and trademark infringement among other claims, and has filed a parallel suit against the company in the United Kingdom.

  • Dozens of amicus briefs have been filed in Gonzalez v. Google, currently before the United States Supreme Court. Many argue that algorithmic recommendations should be covered by Section 230 protections, which generally provide platforms with immunity from liability with respect to third-party content.

  • The CEO of DoNotPay wanted to use ChatGPT in the courtroom, but the bar association blocked the move, stating that it would constitute the unauthorized practice of law, with potential jail time attached. The episode illustrates the careful balance between rapidly deploying new technologies and respecting existing regulations. A recent report from Stanford Law School describes reform efforts within the legal profession aimed at using legal tech to expand access to justice.

  • Brazil’s parliament is considering a new draft AI law that would provide users with certain rights, such as a right to explanation and a right to challenge automated decisions.
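
Several of the detection tools mentioned above look for statistical fingerprints in text rather than visible marks. As an illustration only, here is a minimal Python sketch of the “green list” watermarking idea from contemporaneous academic research; the parameter GAMMA, the hashing rule, and the word-level tokenization are simplifying assumptions of mine, not the actual scheme used by any of these tools or mandated by China’s rules.

```python
import hashlib
import math

# Toy detector for a "green list" statistical text watermark. Everything
# here (GAMMA, the hashing rule, word-level tokens) is an illustrative
# assumption, not any vendor's actual algorithm.

GAMMA = 0.5  # assumed fraction of tokens designated "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly mark `token` green with probability GAMMA, keyed on `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big") < GAMMA * 2**64


def green_z_score(tokens: list[str]) -> float:
    """Standardized green-token count against the unwatermarked null hypothesis."""
    n = len(tokens) - 1  # number of adjacent token pairs scored
    greens = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    # Under the null, the green count is Binomial(n, GAMMA).
    return (greens - GAMMA * n) / math.sqrt(n * GAMMA * (1 - GAMMA))


sample = "the quick brown fox jumps over the lazy dog".split()
print(f"z = {green_z_score(sample):+.2f}")  # near 0 for ordinary human text
```

A watermarked generator would bias its sampling toward green tokens, so its output yields a large positive z-score while unwatermarked text stays near zero; detection then reduces to a one-sided hypothesis test.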

Policy

  • The White House has put out a Roadmap for Researchers on Priorities Related to Information Integrity Research and Development. The goal of the roadmap is “to share Federal Government research priorities in the area of information integrity, focusing on expanding access to high-integrity information while minimizing harm.”

  • The National AI Research Resource Task Force published its final report, recommending $2.6 billion in spending on a national AI research computing infrastructure.

  • The National Institute of Standards and Technology (NIST) published its AI Risk Management Framework (RMF), covering an extensive array of guidelines and procedures for the safe use of AI. NIST is also seeking feedback on a companion AI RMF Playbook through February 27, 2023.

  • The Commission nationale de l'informatique et des libertés (CNIL), France’s data protection authority, is launching a new department focused on artificial intelligence. “The five-person team will guide the CNIL's understanding of AI matters, including AI system functions, associated privacy risks and preparations for the proposed EU AI Act.” The regulator also announced a study of machine-learning databases aimed at producing practical resources for their handling and use.

  • The Equal Employment Opportunity Commission released its Draft Strategic Enforcement Plan, which aims to curb discriminatory use of automated systems, including AI and machine learning, in hiring. The draft is open for public comment through February.

  • The National Telecommunications and Information Administration in the United States is asking for comments addressing issues at the intersection of privacy, equity, and civil rights, including the use of data for AI.

  • The White House is reportedly considering further restrictions on investments in emerging technologies like AI in China. This would build on recent measures such as export controls on AI-related hardware.

—

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and a Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case: none of this is legal advice, and any views I express here are purely my own and not those of any entity, organization, government, or other person.


Authors
  • Peter Henderson
