
The Law, Policy, & AI Briefing: September 2022

The Texas social media law moves forward, TikTok agrees to audits, and an EU proposal causes controversy over open-source models.

[Image: Photo of the Texas capitol]

Welcome to the latest edition of the Law, Policy, and AI Briefing! This past month saw several major developments. Of particular note is the growing attention to the risks of open-source foundation models. The EU is considering regulating these readily available models, which has sparked a debate about how they should be treated. Once again, the question is whether the transparency benefits of open-source release outweigh the potential harms of making the models widely available. The reality is that we need better mechanisms to prevent harmful dual uses of foundation models. Recent approaches have tried to use licensing schemes to ban harmful uses. In our own work, we proposed modifying neural networks themselves to prevent harmful uses. And the U.S. banned A100 and H100 GPU sales to China to prevent military uses. But none of these is foolproof. As regulators gear up to consider how to handle these models, it will be important for researchers to engage in the policy discussion to find approaches that promote transparency while mitigating risks.

Law

  • The Fifth Circuit decided that a Texas law targeting social media can go into effect. The bill, HB 20, states that “A social media platform may not censor a user, a user’s expression, or a user’s ability to receive the expression of another person based on: (1) the viewpoint of the user or another person; (2) the viewpoint represented in the user’s expression or another person’s expression; or (3) a user’s geographic location in this state or any part of this state.” The ruling surprised many legal scholars, since it has generally been accepted that these companies can engage in content moderation and, in doing so, exercise their own freedom of speech. In recent discussion, Prof. Daphne Keller has suggested that platforms might be able to sidestep the law by asking users to opt into content moderation. If companies do not take this approach, however, they will have to figure out how to build AI content moderation systems that comply with the law while keeping harmful content off their websites, which may well be an impossible feat.
  • “The Biden administration and TikTok have drafted a preliminary agreement to resolve national security concerns posed by the Chinese-owned video app…” Importantly for algorithmic oversight, as part of this deal, “Oracle is expected to monitor TikTok’s powerful algorithms that determine the content that the app recommends, in response to concerns that the Chinese government could use its feed as a way to influence the American public.” It is not yet clear exactly how this audit process will work. But a TikTok spokesperson stated that the audits “will ensure that content continues to be flagged and actioned appropriately based on our Community Guidelines and no other factors.”
  • In December 2021, Attorney General Karl A. Racine introduced “Legislation to Stop Discrimination In Automated Decision-Making Tools That Impact Individuals' Daily Lives”. On September 22, 2022, public hearings were held to discuss the merits of and problems with the bill. The bill would ban discriminatory conduct by algorithms, except for legally approved affirmative action plans, and contains transparency and auditing requirements for applicable algorithms. At the hearing last month, various groups (like EPIC) urged the council to pass the bill. However, many raised concerns about its wording. In particular, some argued that focusing on particular demographic features would not prevent discrimination, since proxy features could still be used to discriminate. Others argued that the bill did not properly define the scope of covered algorithms and entities. Finally, participants pointed out that the machine learning research community has no settled definition of what makes an algorithm discriminatory, making it unclear what constitutes a satisfactory audit (the short illustration after this list shows how two common fairness criteria can disagree).
  • An investor lawsuit blames T-Mobile data breaches on its AI system. While the AI system is somewhat tangential to the lawsuit, it highlights an interesting consideration: aggregating data in one consolidated location to boost the performance of AI systems increases security risks, which can in turn create legal exposure, as in this case.
  • Getty Images bans AI-generated content, citing legal concerns. The legality of foundation models is murky territory, since they are typically trained on large quantities of copyrighted data. While many have argued that training machine learning models in this way is protected by fair use, the arguments become more nuanced when considering the content generated by the models themselves. It seems some websites have opted not to be the test case for a lawsuit that might resolve the issue.
  • A new proposal for the EU AI Act would likely impact open-source foundation models, and there is an ongoing debate about it. One perspective is that imposing burdens on the release of open-source foundation models will push development toward closed-source models, sacrificing transparency. The other perspective is that these models should be regulated due to their inherent dangers.
  • Program your autonomous robot to be aware of crime scenes or it might roll right through one! I’m not sure if there’s any liability here, but a food delivery robot rolled through caution tape at a crime scene.
  • “U.S. officials order Nvidia to halt sales of top AI chips to China.” Scholars hoping to prevent harmful uses of AI have long suggested such a move. See, e.g., Flynn (2022); Fedasiuk, Elmgren, Lu (2022).
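To make concrete the point about contested audit definitions raised at the DC bill hearing, here is a minimal, purely illustrative sketch in Python (the groups, outcomes, and decisions are hypothetical and not drawn from the bill or the hearing) showing how two common fairness criteria, demographic parity and equal opportunity, can disagree about the same set of decisions:

```python
# Purely illustrative: hypothetical decisions showing that two common fairness
# criteria can disagree about the same model. Groups, outcomes, and decisions
# below are made up.
import numpy as np

group = np.array(["A"] * 10 + ["B"] * 10)          # protected attribute
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0,   # group A: 5 of 10 qualified
                   1, 1, 1, 1, 1, 1, 1, 1, 0, 0])  # group B: 8 of 10 qualified
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0,   # model selects 4 in group A
                   1, 1, 1, 1, 0, 0, 0, 0, 0, 0])  # model selects 4 in group B

for g in ["A", "B"]:
    mask = group == g
    selection_rate = y_pred[mask].mean()       # demographic parity compares this
    tpr = y_pred[mask & (y_true == 1)].mean()  # equal opportunity compares this (TPR)
    print(f"group {g}: selection rate {selection_rate:.2f}, true positive rate {tpr:.2f}")

# Equal selection rates (0.40 vs 0.40) satisfy demographic parity, but unequal
# true positive rates (0.80 vs 0.50) violate equal opportunity, so the same
# model passes one audit definition and fails the other.
```

Because criteria like these can conflict, an audit requirement that does not specify which definition of discrimination applies leaves auditors and covered entities substantial discretion.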

Policy

  • The National AI Advisory Committee (NAIAC) will meet again on October 12th and 13th, 2022 at Stanford University. Registration link is here.
  • How should we handle the use of computer vision for publicly accessible surveillance? Two recent developments raise this underlying question. In one case, a computer vision program was used to assess, from publicly available footage, how much time lawmakers spent on their phones. In another, an artist paired Instagram photos with publicly accessible surveillance-camera footage showing the Instagrammers taking those photos. Interestingly, the company hosting the camera footage issued a takedown request citing copyright infringement of the footage.
  • How should we handle GPT-generated assignments? A Reddit post by someone claiming to be a student who used GPT to get straight A’s has sparked discussion about how to set policies for the use of these systems. One interesting proposal from Prof. Tiffany Li argues that AI is now a fundamental tool that students will use in their daily lives going forward, and that it is important to teach them how to do so.
  • The NBER Economics of Artificial Intelligence conference took place. I won’t attempt to summarize all of the papers, which are definitely worth checking out, but will highlight a few that are particularly relevant for those interested in law and policy. One work appears to show that AI drives some innovation and growth, but more so in firms that were larger ex ante. To my mind, this is perhaps unsurprising, but it certainly has implications for the effects of AI on market concentration and competition. Another work examines bias-variance trade-offs, suggesting that firms facing competition will favor biased models while monopolistic firms will prefer variance-reducing algorithms.
  • The U.S. Food and Drug Administration is kicking off the third phase of its AI seafood screening program. An interesting use of AI in federal agencies!
  • “The Department of Transportation is interested in receiving comments on the possibility of adapting existing and emerging automation technologies to accelerate the development of real-time roadway intersection safety and warning systems for both drivers and [vulnerable road users] in a cost-effective manner that will facilitate deployment at scale.” This is a chance for machine learning researchers to weigh in by submitting a comment.
  • The Intelligence Advanced Research Projects Activity (IARPA) has a program aimed at authorship attribution using machine learning, effectively de-anonymizing authors on the internet. This creates obvious privacy concerns, as well as concerns about false positives. But the program also includes a privacy component that aims to thwart the authorship attribution mechanisms (a rough sketch of how such attribution typically works follows this list).
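As a rough illustration of the kind of stylometric authorship attribution such a program might build on (this is not IARPA’s approach; the documents, author labels, and model choices below are hypothetical), character-level features plus a simple classifier already go surprisingly far:

```python
# Hypothetical sketch of stylometric authorship attribution: character n-gram
# features plus a linear classifier. The documents and authors are invented;
# real systems use far more data and far richer features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# training documents with known authors (made up for illustration)
train_texts = [
    "I reckon the committee will convene shortly, as per usual.",
    "I reckon we shall see the committee convene again, as per usual.",
    "tbh the meeting is gonna happen soon lol, same as always",
    "tbh they are gonna meet up again soon lol, nothing new",
]
train_authors = ["author_1", "author_1", "author_2", "author_2"]

# character n-grams pick up punctuation, spelling, and other stylistic habits
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_authors)

# attribute an "anonymous" post to the most stylistically similar known author
anonymous_post = ["tbh i reckon nothing is gonna change lol"]
print(model.predict(anonymous_post))        # predicted author label
print(model.predict_proba(anonymous_post))  # per-author probabilities
```

Note that the anonymous post mixes stylistic markers from both invented authors, which is exactly the kind of ambiguity that underlies the false-positive concern when attribution is applied at internet scale.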

Legal Academia AI Roundup

  • Jane Campbell Moriarty & Erin McCluan, “Foreword to the Symposium, The Death of Eyewitness Testimony and the Rise of Machine”. Courtroom evidence is increasingly shifting from eyewitness testimony to machine-generated evidence (e.g., facial recognition matching scores). This foreword to a symposium on the topic introduces relevant work in the area and discusses the challenges of this transition.
  • Sonia Gipson Rankin, “The MiDAS Touch: Atuahene’s “Stategraft” and the Implications of Unregulated Artificial Intelligence.” This paper examines the Michigan Integrated Data Automated System (“MiDAS”), which was part of the reason the state erroneously charged over 37,000 people with unemployment fraud. The paper compares this to the corrupt state practices that have been dubbed “stategraft.”
  • Leon G. Ho, “Countering Personalized Speech.” This paper offers a roadmap for giving users more control over the content they see online. It gives “several proposals along key regulatory modalities to move end-user personalization towards more robust ex ante capabilities that also filter by content type and characteristics, rather than just ad hoc filters on specific pieces of content and content creators.”
  • Han-Wei Ho, Patrick Chung-Chia Huang, and Yun-chien Chang, “Machine Learning Comparative Law.” This paper examines machine learning tools to help with the empirical analysis of law across domains and jurisdictions (so-called comparative law).
  • Gary E. Marchant, “Swords and Shields: Impact of Private Standards in Technology-Based Liability.” Private standards have significant influence on the development of autonomous systems. How should companies that implement these standards be treated in terms of liability? This paper examines this question, among others.

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and a Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.

