
A New Law Designed for Children’s Internet Safety Will Change the Web for Adults, Too

October 25, 2022

The bill requires fundamental changes to platform design and personalization settings.

In September, California Governor Gavin Newsom signed AB-2273, the California Age-Appropriate Design Code Act (Cal-AADC), into law. The bill, designed to protect minors on the internet, goes beyond today's simple parental controls: it requires fundamental changes to platform design that protect children's online privacy while also mitigating harms such as bullying, exploitation, and exposure to inappropriate content.

The law requires new default settings for children, tools for managing privacy preferences, impact assessments before new products are released, and adjustments to the behavioral-manipulation techniques designed to keep children using a product.

While directed toward children’s safety and well-being, the impact of the law could be much broader, says Stanford HAI Privacy and Data Policy Fellow Jennifer King. “This is focused on kids, but this is coming for adult audiences, too,” she says. “What we’re seeing is a shift toward a world where you get more choices over what and how you want things given to you that isn’t simply the company’s version of personalization.”

In this conversation, she explains the implications of the new law, how it will impact AI developers, and what happens next in the U.S. privacy and AI regulatory landscape. 

What will this new law actually do? 

This law shifts the baseline of defaults; it's an application of privacy by design. It's similar to how Apple has implemented App Tracking Transparency, where you now have to opt in to tracking within mobile apps rather than opt out. Over 75 percent of Apple customers say no when asked whether they want to be tracked. This law goes a step further: the baseline is not that you are asked whether you want to be tracked; by default, you are simply not tracked. This new default will apply to children under the age of 18 but could apply to any user of a website if the operator decides that offering that option to everyone is simpler than identifying the children who visit their website.
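
To make the default shift concrete, here is a minimal sketch in Python of what a privacy-by-design settings model might look like. The class and field names are hypothetical illustrations, not drawn from the statute or from any actual platform:

```python
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # Hypothetical Cal-AADC-style baseline: tracking and profiling are
    # OFF unless the user opts in, rather than ON unless they opt out.
    ad_tracking: bool = False          # opt-in, as with Apple's App Tracking Transparency
    behavioral_profiling: bool = False
    precise_geolocation: bool = False

    def opt_in_to_tracking(self) -> None:
        """Tracking is enabled only by an explicit user action."""
        self.ad_tracking = True


# Every new account starts at the protective baseline.
settings = PrivacySettings()
assert settings.ad_tracking is False
```

The design question is simply which state requires action: under an opt-out regime, reaching the protective settings takes effort; under this law, the protective settings are the starting point.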

While much of this new bill relates to privacy, there's also a health and well-being portion based on an assumption of algorithmic manipulation. There is growing evidence that algorithmic systems, like news feeds and recommendation engines, have harmful effects: they breed addictive behaviors, particularly among vulnerable users such as children, and can negatively affect mental health and self-esteem. This bill establishes that if your defaults cause these harms for children, then you have to change them.

Companies can respond in different ways. They can elect to go the age verification route, for example, and prohibit children from visiting their site. Or they can shift all of their defaults and treat all visitors with the baseline defaults established by the law.

We saw a story last year that Instagram’s algorithm worsens young girls’ body image issues. So does this bill say Instagram can no longer use that algorithm? 

Instagram would likely be prohibited from using that algorithm to display content to children, given that there is both research and anecdotal evidence that it causes harm. The company could bifurcate its product into an over-18 version and an under-18 version, and in the under-18 version, algorithms could not serve content in a way that perpetuates these harms. One solution, proposed by whistleblower Frances Haugen, is to move away from an engagement-based content display to one organized by time, for example, one that displays the most recent posts by default. There are many potential design changes companies could weigh, from simple feature changes (such as turning off autoplay, which is required by the Children's Code, a similar law in the UK) to rethinking how an algorithmic system that promotes well-being would function.
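
As a rough illustration of Haugen's suggestion, here is a sketch in Python contrasting an engagement-ranked feed with a chronological one. The Post fields and the minor-account gate are assumptions for illustration, not Instagram's actual implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    engagement_score: float  # e.g., predicted likes, comments, and shares

def rank_feed(posts: list[Post], minor_account: bool) -> list[Post]:
    """Return the feed order for a given account type."""
    if minor_account:
        # Haugen-style default for under-18 accounts: newest posts first,
        # with no engagement optimization.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)
    # Engagement-based display: highest predicted engagement first.
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)
```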

Does this mostly only impact big tech companies?

Generally, yes, unless your personal website directs content at children, or you know that a substantial number of your visitors are under the age of 18. I think EdTech [educational technology] sites and products will be impacted. 

This is a California law, not a federal one. What additional challenges does that create for compliance and enforcement?

First, there will need to be a determination that the existing federal law, COPPA, does not preempt this law. COPPA, for instance, only covers children under the age of 13. As I noted, a UK version of this law already exists, so I think that's one reason why we haven't seen much public pushback from the big platforms in the run-up to the bill's passage. The Googles and Metas have already had to comply with the UK Children's Code for a year now. But for some businesses that are U.S.-focused, the Cal-AADC will be entirely new. So we might see some companies split their customers through IP filtering; for example, you could be in Nevada and I could be in California, and we could be visiting the same website, but I'll be presented with a California version and you will not. This is already happening today with CCPA compliance.
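
The IP-filtering pattern King describes might look like the following sketch, where lookup_region stands in for a real IP-geolocation service (the function body here is a hypothetical placeholder, and the setting names are illustrative):

```python
def lookup_region(ip_address: str) -> str:
    """Hypothetical stand-in for a real IP-geolocation lookup."""
    # A production system would query a geolocation database or API here.
    return "CA" if ip_address.startswith("192.0.2.") else "NV"

def defaults_for_visitor(ip_address: str) -> dict:
    """Serve Cal-AADC-style defaults only to California visitors."""
    if lookup_region(ip_address) == "CA":
        return {"ad_tracking": False, "profiling": False}  # California version
    return {"ad_tracking": True, "profiling": True}        # all other visitors

# Two people visiting the same site can get different defaults.
print(defaults_for_visitor("192.0.2.10"))   # California baseline
print(defaults_for_visitor("203.0.113.7"))  # non-California baseline
```

This mirrors how many sites already gate CCPA features, such as "Do Not Sell My Personal Information" links, to visitors who appear to be in California.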

So what happens next with this law? 

It doesn’t go into effect until 2024, and it will go through a rulemaking process, which includes the opportunity for public comments. 

What’s the main takeaway here for you?

Similar to the UK Children's Code, the Cal-AADC represents the first attempt to regulate algorithms specifically from the perspective of health and well-being. This signals a shift toward focusing on what I think of as "AI safety": putting a burden on AI developers to demonstrate that an algorithm doesn't harm people before it is launched. Perhaps you won't have to demonstrate to an authority before releasing an algorithm that it doesn't cause harm, but you will be obligated to think of a way to present content that doesn't exacerbate the set of harms the Cal-AADC identifies. Very few of these online platforms considered, at the product development phase, what a safe and healthy experience for children would look like. Now they will have to.


Author
Shana Lynch
