Law, Policy, & AI Update: Killer Robots, Rent Lawsuit | Stanford HAI

December 12, 2022

The San Francisco Police Department requests to use robots with deadly force, new complaints filed alleging algorithmic discrimination, and more.

The focus this month in AI law and policy is a string of complaints filed against a swath of AI systems used by companies. Each complaint alleges a violation of a different body of law: the Texas Capture or Use of Biometric Identifier Act (CUBI), employment discrimination law, housing discrimination law, and consumer protection law.

These cases demonstrate the expanding range of legal theories that can be used to challenge potentially harmful uses of AI systems. They also highlight the growing amount of due diligence that companies must conduct before deploying an AI system in the real world.

Law

  • After ProPublica released an investigation into real-estate company RealPage, suggesting that the company uses algorithms to help landlords push up rent prices, several class action lawsuits have been filed and the DOJ is investigating the company.

  • Advocacy organization Real Women in Trucking has filed a complaint against Meta with the U.S. Equal Employment Opportunity Commission. The complaint alleges that Facebook’s ad targeting system significantly biases trucking job advertisements toward men. This comes not long after Facebook entered into a settlement with the U.S. government over similar complaints about bias in its ad targeting system.

  • Texas has begun enforcing its Capture or Use of Biometric Identifier Act (CUBI), which seeks to prevent noncompliant capture of biometric information, including biometric features embedded in machine learning models. The Texas Attorney General has filed a lawsuit against Meta and another against Google. See some analysis here.

  • Stockfish settles with ChessBase in a dispute over the use of GPLv3-licensed code. Stockfish previously alleged that ChessBase used its chess engine and associated code, swapping out only some components such as the neural network weights, without respecting the terms of the GPL license. Interestingly, the settlement appears to also cover new neural network weights included in ChessBase’s product that would interact with Stockfish code in certain ways. The settlement states that neural networks offered by ChessBase for use with Stockfish “that are included in the compilation or dynamically loaded at runtime to initialize the data structures and logic of the Software must be subject to GPL-3.0 or a compatible license.”

  • UK regulator Ofcom is considering forcing social media companies to reveal their algorithms. This comes after an earlier study suggested that the majority of people receive their news through an intermediary (like social media companies) that can influence consumption of news and potentially drive polarization. 

Policy

  • The San Francisco Board of Supervisors approved the Police Department’s requested new policy permitting the use of explosives-carrying robots, stating that, “Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD.” A few days later the Board reversed its decision, sending the policy back to the rules committee for further debate.

  • The U.S.-EU Trade and Technology Council (TTC) released a joint statement which includes an AI roadmap to “inform our approaches to AI risk management and trustworthy AI on both sides of the Atlantic, and advance collaborative approaches in international standards bodies related to AI.” It would also seek to bring together AI experts to jointly address “challenges in key focus areas such as extreme weather and climate forecasting; health and medicine; electric grid optimization; agriculture optimization; and emergency response management.”

  • The UK Information Commissioner’s Office released a new report, “How to use AI and personal data appropriately and lawfully.”

  • The Australian Human Rights Commission and Actuaries Institute put out a new policy report to “help actuaries and insurers to comply with the federal anti-discrimination legislation when AI is used in pricing or underwriting insurance products.”

  • The U.S. Department of Energy is seeking comments on approaches “for accelerating innovations in emerging technologies to drive scientific discovery to sustainable production of new technologies” including AI. Comments are due Dec. 23, 2022. 

  • The U.S. Securities and Exchange Commission has posted a new rule that “would require [financial] advisers to conduct due diligence prior to engaging a service provider to perform certain services or functions.” The new rule specifically highlights certain AI scenarios and asks for comments, including to the following question: “[I]f an adviser is outsourcing to a service provider… that incorporates artificial intelligence into its services, should that adviser be required to confirm it has sufficient internal expertise to effectively oversee the service provider, and if not, obtain a third-party expert to provide such oversight?”

  • The U.S. National Labor Relations Board adds to the growing trend of agencies leveraging their specific statutory authority to regulate uses of AI. In this case, a memo describes plans to curb abuses of workplace surveillance and algorithmic employee management “through vigorously enforcing current law and by urging the Board to apply settled labor-law principles in a new framework.”

Legal Academia AI Roundup

  • Insuring AI: The Role of Insurance in Artificial Intelligence Regulation by Anat Lior. Taking a deep dive into how we should think about insuring AI systems and using insurance as a regulatory mechanism.

  • Glass Box Artificial Intelligence in Criminal Justice by Brandon L. Garrett and Cynthia Rudin. Arguing that interpretable AI can also be accurate and that there is not necessarily a trade-off between accuracy and interpretability, so we should push for interpretability for algorithms in criminal justice contexts.

  • Reclaiming Feudalism for the Technological Era by Shelly Kreiczer-Levy. Examining property law models for AI systems where users retain ownership over some part of the system (e.g., a car) while a company retains ownership over other parts (e.g., the autonomous driving system installed in the car).

—

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition. Learn more.

Authors
  • Peter Henderson