
Law, Policy, & AI Update: Killer Robots, Rent Lawsuit

The San Francisco Police Department requests to use robots with deadly force, new complaints filed alleging algorithmic discrimination, and more.


A police SWAT officer uses a mechanical-arm bomb disposal robot. Recently the San Francisco Police Department asked the city to approve a new policy that would permit the use of explosives-carrying robots.

The focus this month in AI law and policy is a string of complaints filed over AI systems used by companies. Each alleges a violation of a different body of law: the Texas Capture or Use of Biometric Identifier Act (CUBI), employment discrimination law, housing discrimination law, and consumer protection law.

These cases demonstrate the growing range of legal theories that can be used to challenge potentially harmful uses of AI systems. They also shine a light on the mounting due diligence that companies must conduct before deploying an AI system in the real world.


  • After ProPublica released an investigation into real-estate software company RealPage, suggesting that the company's algorithms help landlords push up rent prices, several class-action lawsuits were filed and the DOJ opened an investigation into the company.
  • Advocacy organization Real Women in Trucking has filed a complaint against Meta with the U.S. Equal Employment Opportunity Commission, alleging that Facebook's ad targeting system significantly skews trucking job advertisements toward men. This comes not long after Meta entered into a settlement with the U.S. government over similar complaints about bias in its ad targeting system.
  • Texas's Capture or Use of Biometric Identifier Act (CUBI) seeks to prevent noncompliant capture of biometric information, including biometric features embedded in machine learning models. On that basis, the Texas Attorney General has filed a lawsuit against Meta and another one against Google. See some analysis here.
  • Stockfish settled with ChessBase in a dispute over the use of GPLv3-licensed code. Stockfish had alleged that ChessBase used its chess engine and associated code, swapping out only some components like the neural network weights, without respecting the terms of the GPL license. Interestingly, the settlement appears to also cover new neural network weights included in ChessBase's product that would interact with Stockfish code in certain ways. The settlement states that neural networks offered by ChessBase for use with Stockfish "that are included in the compilation or dynamically loaded at runtime to initialize the data structures and logic of the Software must be subject to GPL-3.0 or a compatible license."
  • UK regulator Ofcom is considering forcing social media companies to reveal their algorithms. This comes after an earlier study suggested that the majority of people receive their news through an intermediary (like social media companies) that can influence consumption of news and potentially drive polarization. 


  • The San Francisco Board of Supervisors approved the Police Department's requested new policy permitting explosives-carrying robots, stating that, "Robots will only be used as a deadly force option when risk of loss of life to members of the public or officers is imminent and outweighs any other force option available to SFPD." A few days later, the Board reversed its decision and sent the policy back to the rules committee for further debate.
  • The U.S.-EU Trade and Technology Council (TTC) released a joint statement which includes an AI roadmap to “inform our approaches to AI risk management and trustworthy AI on both sides of the Atlantic, and advance collaborative approaches in international standards bodies related to AI.” It would also seek to bring together AI experts to jointly address “challenges in key focus areas such as extreme weather and climate forecasting; health and medicine; electric grid optimization; agriculture optimization; and emergency response management.”
  • The UK Information Commissioner’s Office released a new report, “How to use AI and personal data appropriately and lawfully.”
  • The Australian Human Rights Commission and Actuaries Institute put out a new policy report to “help actuaries and insurers to comply with the federal anti-discrimination legislation when AI is used in pricing or underwriting insurance products.”
  • The U.S. Department of Energy is seeking comments on approaches “for accelerating innovations in emerging technologies to drive scientific discovery to sustainable production of new technologies” including AI. Comments are due Dec. 23, 2022. 
  • The U.S. Securities and Exchange Commission has posted a new rule that “would require [financial] advisers to conduct due diligence prior to engaging a service provider to perform certain services or functions.” The new rule specifically highlights certain AI scenarios and asks for comments, including to the following question: “[I]f an adviser is outsourcing to a service provider… that incorporates artificial intelligence into its services, should that adviser be required to confirm it has sufficient internal expertise to effectively oversee the service provider, and if not, obtain a third-party expert to provide such oversight?”
  • The U.S. National Labor Relations Board adds to the growing trend of agencies leveraging their specific statutory authority to regulate uses of AI. In this case, a memo describes curbing abuses of workplace surveillance and algorithmic employee management "through vigorously enforcing current law and by urging the Board to apply settled labor-law principles in a new framework."

Legal Academia AI Roundup

Who am I? I’m a PhD (Machine Learning)-JD candidate at Stanford University and Stanford RegLab fellow (you can learn more about my research here). Each month I round up interesting news and events somewhere at the intersection of Law, Policy, and AI. Feel free to send me things that you think should be highlighted @PeterHndrsn. Also… just in case, none of this is legal advice, and any views I express here are purely my own and are not those of any entity, organization, government, or other person.

Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.
