
Law, Policy, & AI Update: China Requires AI Watermarks, ChatGPT Won’t Make It to U.S. Courtrooms

And last year’s trickle of AI-related court cases now looks more like a surge.

[Image: Four AI-generated pop-art flags of China. Credit: DALL-E]

A new rule in China requires watermarks or other identifiers for AI-generated media.

In the last couple of months, artificial intelligence has faced the legal system on many fronts as it is deployed in products across the globe. The United States Supreme Court is grappling with whether algorithmic recommendations should receive Section 230 protections, courts in the United States and United Kingdom are examining whether large-scale training of foundation models complies with copyright and contract law, and the list goes on. The legal system will have to resolve many AI-related issues rapidly over the next few years, and those decisions will shape the landscape of AI development.

Law

  • In January, China’s Cyberspace Administration enacted new regulations on deep-synthesis technology, covering deepfakes and other types of generative AI systems. The rules place significant restrictions on AI-generated media, including a requirement that such media carry identifiers, like watermarks. This comes as several new tools (including ones from OpenAI and Stanford, among others) have launched to identify AI-generated content, which can be used to cheat on exams, spread disinformation, and more.
  • The Department of Consumer and Worker Protection (DCWP) in New York postponed enforcement of a law targeting AI bias until April 15, 2023. Citing the high volume of public comments, the department will hold a second hearing to address concerns.
  • In Canada, a class action lawsuit is moving forward against Meta, alleging employment and housing discrimination through its ad practices. This comes after Meta and the U.S. Department of Justice reached a settlement modifying Meta’s ad-targeting algorithms to better comply with antidiscrimination law.
  • Litigation has been filed against Stability AI and Midjourney over their image-generation foundation models. In the United States, a class action lawsuit against both companies alleges copyright infringement, among other claims. Getty Images has also sued Stability AI for copyright and trademark infringement, among other claims, and has filed a parallel suit against the company in the United Kingdom.
  • Dozens of amicus briefs have been filed in Gonzalez v. Google, currently before the United States Supreme Court. Many argue that algorithmic recommendations should be covered by Section 230 protections, which generally shield platforms from liability for third-party content.
  • The CEO of DoNotPay wanted to use ChatGPT in the courtroom, but the bar association blocked the move, warning that it would constitute the unauthorized practice of law, with potential jail time attached. The episode illustrates the tension between rapidly deploying new technologies and complying with professional regulations. A recent report from Stanford Law School describes reform efforts within the legal profession to use legal tech to expand access to justice.
  • Brazil’s parliament is considering a new draft AI law that would grant users certain rights, such as the right to an explanation of, or to challenge, automated decisions.

Policy

  • The White House has put out a Roadmap for Researchers on Priorities Related to Information Integrity Research and Development. The goal of the roadmap is “to share Federal Government research priorities in the area of information integrity, focusing on expanding access to high-integrity information while minimizing harm.”
  • The National AI Research Resource Task Force published its final report, recommending $2.6B in spending on a national AI research computing infrastructure.
  • The National Institute of Standards and Technology (NIST) has published its AI Risk Management Framework (RMF), an extensive set of guidelines and procedures for the safe use of AI. NIST is also seeking feedback on a companion AI RMF Playbook through February 27, 2023.
  • The Commission nationale de l'informatique et des libertés (CNIL), France’s data protection authority, is launching a new department focused on artificial intelligence. “The five-person team will guide the CNIL's understanding of AI matters, including AI system functions, associated privacy risks and preparations for the proposed EU AI Act.” The regulator also announced a study of machine-learning databases, focused on producing practical resources for their handling and use.
  • The Equal Employment Opportunity Commission released its Draft Strategic Enforcement Plan, aiming to curtail the use of automated systems, including AI and machine learning, in hiring. The plan is open for public comment through February.
  • The National Telecommunications and Information Administration in the United States is requesting comments on issues at the intersection of privacy, equity, and civil rights, including the use of data for AI.
  • The White House is reportedly considering further restrictions on investments in emerging technologies, such as AI, in China. This would build on recent measures like export controls on AI-related hardware.

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.

