Algorithms, Privacy, and the Future of Tech Regulation in California

January 31, 2022

A recent Stanford-sponsored event offered insights at the intersection of regulation, innovation, and engagement.

What is the best approach to regulating a potentially harmful cutting-edge technology like AI while still encouraging innovation?

“We need to think about regulation at the right time in just the right amount,” says Jeremy Weinstein, Stanford professor of political science and co-author of the recent book System Error: Where Big Tech Went Wrong and How We Can Reboot. “We need to understand what regulatory models will get us to that Goldilocks-type outcome and engage more stakeholders in the process.”

Weinstein shared his thoughts at Algorithms, Privacy, and the Future of Tech Regulation in California, a virtual conversation co-hosted by California 100, the Stanford Institute for Economic Policy Research, and the Stanford RegLab on January 18, 2022. Joining him were Jennifer Urban, board chair of the California Privacy Protection Agency and a clinical professor at UC Berkeley Law, and Ernestine Fu, a California 100 commissioner and venture partner at Alsop Louie. The panel discussion, moderated by California 100 Executive Director Karthick Ramakrishnan, covered technology regulation in California and beyond, examining common misconceptions about regulation and the problem of low consumer trust in technology.

Setting the Stage for Broader Regulation

Algorithms have proliferated as decision-making engines in domains from smart cities to bail setting. “But the quality of the data matters,” Fu says. Problematic data can encode racial, gender, and other biases, leading to serious social harms.

Often, algorithms are optimized for a single end, such as engagement in the case of social media platforms. But focusing on only one goal can produce harmful side effects, such as the spread of misinformation about the COVID-19 vaccine.
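To make that trade-off concrete, here is a minimal, hypothetical sketch in Python: it ranks two invented posts first by predicted engagement alone, then by a blended objective that subtracts a misinformation-risk penalty. Every post, score, and weight below is an illustrative assumption, not any platform's actual ranking system.

```python
# A toy illustration of single-objective vs. blended-objective ranking.
# All posts, scores, and weights are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # hypothetical model output in [0, 1]
    misinfo_risk: float          # hypothetical classifier score in [0, 1]

posts = [
    Post("Verified vaccine Q&A", predicted_engagement=0.4, misinfo_risk=0.02),
    Post("Sensational vaccine rumor", predicted_engagement=0.9, misinfo_risk=0.85),
]

# Optimizing for engagement alone surfaces the risky post first.
by_engagement = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

# Adding a second term to the objective changes what gets amplified.
RISK_WEIGHT = 1.0  # illustrative trade-off parameter
by_blended = sorted(
    posts,
    key=lambda p: p.predicted_engagement - RISK_WEIGHT * p.misinfo_risk,
    reverse=True,
)

print([p.title for p in by_engagement])  # ['Sensational vaccine rumor', ...]
print([p.title for p in by_blended])     # ['Verified vaccine Q&A', ...]
```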

The California Privacy Protection Agency (CPPA), created through California’s Proposition 24 in 2020 as the first dedicated privacy agency in the U.S., is working on rules to regulate algorithms and other technologies through the data they rely on. “We’re attending to how consumers understand and make decisions about algorithm-based processes,” says CPPA chair Urban. The forthcoming rules would govern consumer rights to opt out of automated decision-making and to obtain information about the logic behind such decisions, among other areas.
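Those rules were still in draft at the time of the event, so any implementation detail is speculative. As a rough sketch of what an opt-out right could mean in practice, the Python snippet below checks a consumer's opt-out flag before running an automated decision, routes opted-out consumers to human review, and retains the decision logic so it can be disclosed on request. All function names, fields, and thresholds are invented for illustration.

```python
# Hypothetical sketch of honoring an opt-out from automated decision-making.
# The CPPA's rules were unfinished when this panel took place; every name and
# threshold below is an invented placeholder, not the agency's requirements.

def automated_score(applicant: dict) -> tuple[bool, dict]:
    """Toy automated decision that keeps its factor weights for later disclosure."""
    weights = {"income": 0.6, "tenure_years": 0.4}  # illustrative decision logic
    score = sum(w * applicant[k] for k, w in weights.items())
    return score >= 50, {"score": score, "weights": weights}

def decide(applicant: dict, opted_out: bool):
    if opted_out:
        # Honor the consumer's opt-out: no automated decision is made.
        return "routed_to_human_review", None
    approved, logic = automated_score(applicant)
    # Retain the logic so the consumer can request information about it.
    return ("approved" if approved else "denied"), logic

print(decide({"income": 90, "tenure_years": 2}, opted_out=False))  # automated path
print(decide({"income": 90, "tenure_years": 2}, opted_out=True))   # human review
```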

Kill the Regulate-Versus-Innovate Construct

Regulation always underlies markets, Weinstein says. It’s why we don’t get sick drinking milk, fall ill from a headache medicine, or live in unsafe housing.

However, “We have to do away with the binary notions like regulation versus innovation,” Weinstein adds. “It’s a false narrative that effective functioning of an innovative economy depends on there being zero regulation.”

Urban says that well-informed regulation can benefit businesses and consumers: “Regulation aims to provide guardrails, allowing a robust market to develop and businesses to flourish while reflecting the needs of consumers. Regulators need to understand the business models and whether their actions would be ‘breaking’ something in the industry.”

The speakers agreed that companies must do a better job of balancing their own interests with those of the broader public. That is, as regulators work to catch up with technology, businesses should cultivate clearer professional ethics around responsible AI and other areas.

Creating Trust with Control

Moving forward, companies must give people more discretion over how their personal data is collected and used.

“There’s a lack of trust with regard to companies and the government handling people’s personal data,” Urban says. “People don’t feel they have a real choice.”

The CPPA is trying to create more control for citizens — “but that requires allowing people to have access to companies’ information about them so they can make that choice,” Urban says.

California can be a test lab for how to build a future that balances the interests of corporations and citizens, Weinstein adds, but that future won’t come from the state’s ballot system, which is too often influenced by a small number of wealthy players. Instead, it should come from companies and government engaging diverse stakeholders in key decisions, and from better educating people about the choices they make with their data. “Even if people don’t know the technology, they can voice their values and concerns,” Urban says.

And technologists need to own the problems arising from these tools and “not just hide from the threat of regulation,” Weinstein says. He points to Snapchat’s recent move toward greater content moderation, such as its efforts against drug transactions on the platform.

In the end, “Our technological future is the responsibility not of CEOs or engineers, but our democracy,” Weinstein concludes. “People have been passive about technology’s impact on society. It’s time to exercise our democratic muscles more fully.”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor: Sachin Waikar

Related News

The Art of the Automated Negotiation
Matty Smith | Jun 18, 2025
Different AI agents have wildly different negotiation skills. If we outsource these tasks to agents, we may need to bring the "best" AI agent to the digital table.

How Language Bias Persists in Scientific Publishing Despite AI Tools
Scott Hadly | Jun 16, 2025
Stanford researchers highlight the ongoing challenges of language discrimination in academic publishing, revealing that AI tools may not be the solution for non-native speakers.

Exploring the Dangers of AI in Mental Health Care
Sarah Wells | Jun 11, 2025
A new Stanford study reveals that AI therapy chatbots may not only lack effectiveness compared to human therapists but could also contribute to harmful stigma and dangerous responses.