Algorithms, Privacy, and the Future of Tech Regulation in California

January 31, 2022

A recent Stanford-sponsored event offered insights at the intersection of regulation, innovation, and engagement.

What is the best approach to regulating a potentially harmful cutting-edge technology like AI while still encouraging innovation?

“We need to think about regulation at the right time in just the right amount,” says Jeremy Weinstein, Stanford professor of political science and co-author of the recent book System Error: Where Big Tech Went Wrong and How We Can Reboot. “We need to understand what regulatory models will get us to that Goldilocks-type outcome and engage more stakeholders in the process.”

Weinstein shared his thoughts at "Algorithms, Privacy, and the Future of Tech Regulation in California," a virtual conversation co-hosted by California 100, the Stanford Institute for Economic Policy Research, and the Stanford RegLab on January 18, 2022. Joining him were Jennifer Urban, board chair of the California Privacy Protection Agency and a clinical professor at UC Berkeley Law, and Ernestine Fu, a California 100 commissioner and venture partner at Alsop Louie. The panel discussion, moderated by California 100 Executive Director Karthick Ramakrishnan, covered technology regulation in California and beyond, examining harmful misconceptions about regulation and low consumer trust in technology.

Setting the Stage for Broader Regulation

Algorithms have proliferated as decision-making engines in domains from smart cities to bail setting. “But the quality of the data matters,” Fu says. Problematic data could lead to racial, gender, or other biases and serious social harms.

Often, algorithms are optimized for just one end, such as engagement in the case of social media platforms. But focusing on only one goal can lead to harmful side effects — misinformation regarding the COVID vaccine, for example.

The California Privacy Protection Agency (CPPA) — created through California’s Proposition 24 in 2020 as the first dedicated privacy agency in the U.S. — is working on rules to regulate algorithms and other technologies through data. “We’re attending to how consumers understand and make decisions about algorithm-based processes,” says CPPA chair Urban. The forthcoming rules would govern consumer rights to opt out of automated decision making and to obtain information about the logic behind such decisions, among other areas.

Kill the Regulate-Versus-Innovate Construct

Regulation always underlies markets, Weinstein says. It’s why we don’t get sick drinking milk, fall ill from a headache medicine, or live in unsafe housing.

However, “We have to do away with the binary notions like regulation versus innovation,” Weinstein adds. “It’s a false narrative that effective functioning of an innovative economy depends on there being zero regulation.”

Urban says that well-informed regulation can benefit businesses and consumers: “Regulation aims to provide guardrails, allowing a robust market to develop and businesses to flourish while reflecting the needs of consumers. Regulators need to understand the business models and whether their actions would be ‘breaking’ something in the industry.”

The speakers agreed that companies must do a better job of balancing their own interests with those of the broader public. That is, as regulators work to catch up with technology, businesses should work to cultivate clearer professional ethics around responsible AI and other areas.

Creating Trust with Control

Moving forward, companies must give people more discretion over how their personal data is collected and used.

“There’s a lack of trust with regard to companies and the government handling people’s personal data,” Urban says. “People don’t feel they have a real choice.”

The CPPA is trying to create more control for citizens — “but that requires allowing people to have access to companies’ information about them so they can make that choice,” Urban says.

California can be a test lab for how to build a future that balances the interests of corporations and citizens, Weinstein adds, but it won’t come from the state’s ballot system, which is too often influenced by a small number of wealthy players. Instead, it should come from companies and government engaging diverse stakeholders in key decisions and issues and more education for people making decisions about their data. “Even if people don’t know the technology, they can voice their values and concerns,” Urban says.

And technologists need to own problems arising from these tools and “not just hide from the threat of regulation,” Weinstein says. He points to Snapchat’s recent move into greater content moderation, such as that related to drug transactions.

In the end, “Our technological future is the responsibility not of CEOs or engineers, but our democracy,” Weinstein concludes. “People have been passive about technology’s impact on society. It’s time to exercise our democratic muscles more fully.”

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor(s): Sachin Waikar
