Policy | Testimony

Jen King's Testimony Before the U.S. House Committee on Energy and Commerce Oversight and Investigations Subcommittee

Date: November 18, 2025
Topics: Privacy, Safety, Security
Abstract

In this testimony, presented at the U.S. House Committee on Energy and Commerce’s Subcommittee on Oversight and Investigations hearing titled “Innovation with Integrity: Examining the Risks and Benefits of AI Chatbots,” Jen King shares insights on data privacy concerns associated with the use of chatbots. She highlights opportunities for congressional action to protect chatbot users from related harms.

Executive Summary

Americans want limits on the types of data companies collect about them, especially sensitive personal data related to their health. While technologies designed for and used specifically in healthcare settings are governed by the Health Insurance Portability and Accountability Act (HIPAA), general-purpose tools like chatbots are not. Yet consumers are increasingly turning to these chatbots for health-related concerns, including mental health support.

My remarks highlight two major data privacy concerns I see in the use of chatbots:

  1. Users are increasingly disclosing highly sensitive personal information to chatbots, which are designed to mimic human conversation and maximize user engagement. Large platforms are contemplating how to monetize this data in other parts of their businesses.

  2. Developers are incorporating chatbot-derived user data into model training without oversight. Their privacy policies demonstrate a lack of transparency regarding whether and how they take steps to mitigate privacy risks, including for children’s data. 

To address these concerns, I recommend three specific areas for congressional attention:

  • Implement data privacy and safety design principles. Demand that chatbot developers institute both data privacy and health and safety design principles that prioritize the trust and well-being of the public.

  • Minimize the scope of personal data in AI training. Mandate that developers make transparent their data collection and processing practices. Users should not be automatically opted in to model training, and developers should proactively remove sensitive data from training sets.

  • Demand that developers adopt safety metrics. Developers must track and report metrics related to user privacy, safety, and experiences of harm, and must expand vetted researcher access to chatbot training data to enable independent review and accountability.

Authors
  • Jennifer King
Related
  • Rethinking Privacy in the AI Era: Policy Provocations for a Data-Centric World
    Jennifer King, Caroline Meinhardt
    White Paper | Deep Dive | Feb 22

    This white paper explores the current and future impact of privacy and data protection legislation on AI development and provides recommendations for mitigating privacy harms in an AI era.

Related Publications

Validating Claims About AI: A Policymaker’s Guide
Olawale Salaudeen, Anka Reuel, Angelina Wang, Sanmi Koyejo
Policy Brief | Quick Read | Sep 24, 2025
Topics: Foundation Models; Privacy, Safety, Security

This brief proposes a practical validation framework to help policymakers separate legitimate claims about AI systems from unsupported claims.

Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Policy Brief | Quick Read | Jul 21, 2025
Topics: Privacy, Safety, Security; Education, Skills

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act
Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Issue Brief | Quick Read | Jun 30, 2025
Topics: Regulation, Policy, Governance; Privacy, Safety, Security

This brief assesses the benefits of adverse event reporting systems for AI, which report failures and harms post-deployment, and provides policy recommendations for such systems.

Safeguarding Third-Party AI Research
Kevin Klyman, Shayne Longpre, Sayash Kapoor, Rishi Bommasani, Percy Liang, Peter Henderson
Policy Brief | Quick Read | Feb 13, 2025
Topics: Privacy, Safety, Security; Regulation, Policy, Governance

This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.