Daniel E. Ho | Stanford HAI

Faculty, Senior Fellow

Daniel E. Ho

William Benjamin Scott and Luna M. Scott Professor of Law | Professor of Political Science | Professor of Computer Science (by courtesy) | Senior Fellow, Stanford HAI | Senior Fellow, Stanford Institute for Economic and Policy Research | Director of the Regulation, Evaluation, and Governance Lab (RegLab)

Topics
Democracy
Government, Public Administration
Law Enforcement and Justice
Regulation, Policy, Governance
External Bio

Daniel E. Ho, J.D., Ph.D., is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, and is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences and Senior Fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).


Latest Related to Daniel E. Ho

response to request

Response to OSTP's Request for Information on Accelerating the American Scientific Enterprise

Rishi Bommasani, John Etchemendy, Surya Ganguli, Daniel E. Ho, Guido Imbens, James Landay, Fei-Fei Li, Russell Wald
Sciences (Social, Health, Biological, Physical)
Regulation, Policy, Governance
Quick Read
Dec 26

Stanford scholars respond to a federal RFI on scientific discovery, calling for the government to support a new “team science” academic research model for AI-enabled discovery.

issue brief

Adverse Event Reporting for AI: Developing the Information Infrastructure Government Needs to Learn and Act

Lindsey A. Gailmard, Drew Spence, Daniel E. Ho
Regulation, Policy, Governance
Privacy, Safety, Security
Quick Read
Jun 30

This brief assesses the benefits of and provides policy recommendations for adverse event reporting systems for AI that report failures and harms post deployment.

policy brief

Cleaning Up Policy Sludge: An AI Statutory Research System

Faiz Surani, Lindsey A. Gailmard, Allison Casasola, Varun Magesh, Emily J. Robitschek, Christine Tsang, Derek Ouyang, Daniel E. Ho
Government, Public Administration
Quick Read
Jun 18

This brief introduces a novel AI tool that performs statutory surveys to help governments—such as the San Francisco City Attorney's Office—identify policy sludge and accelerate legal reform.

All Related

San Francisco Wants To Use AI To Save Itself From Bureaucracy
POLITICO
Jun 05, 2025
media mention

Daniel E. Ho, HAI Senior Fellow and Director of the Stanford RegLab, is working with City Attorney David Chiu to identify and delete old, redundant municipal code sections.

Government, Public Administration
media mention
Response to OSTP’s Request for Information on the Development of an AI Action Plan
Caroline Meinhardt, Daniel Zhang, Rishi Bommasani, Jennifer King, Russell Wald, Percy Liang, Daniel E. Ho
Mar 17, 2025
response to request

Stanford scholars respond to a federal RFI on the development of an AI Action Plan, urging policymakers to promote open and scientific innovation, craft evidence-based AI policy, and empower government leaders.

Regulation, Policy, Governance
response to request
Assessing the Implementation of Federal AI Leadership and Compliance Mandates
Jennifer Wang, Mirac Suzgun, Caroline Meinhardt, Daniel Zhang, Kazia Nowacki, Daniel E. Ho
Deep Dive
Jan 17, 2025
whitepaper

This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.

Government, Public Administration
Regulation, Policy, Governance
whitepaper
Stanford AI Model Helps Locate Racist Deeds In Santa Clara County
KQED
Oct 21, 2024
media mention

Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara accelerate the process of flagging and mapping restrictive covenants. 

Government, Public Administration
Regulation, Policy, Governance
Law Enforcement and Justice
Machine Learning
Foundation Models
media mention
AI Seeks Out Racist Language in Property Deeds for Termination
Bloomberg Law
Oct 17, 2024
media mention

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

Machine Learning
Regulation, Policy, Governance
Foundation Models
Law Enforcement and Justice
media mention
Congressional Boot Camp on AI

Oct 06, 2024

The Congressional Boot Camp on AI convenes staffers from both the House and Senate on Stanford University's campus in California. Each session features world-class scholars from Stanford University, leaders from Silicon Valley, and pioneers from civil society organizations. The 2025 boot camp was held on August 11-13, 2025.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Alexander Wan, Yifan Mai, Percy Liang, Daniel E. Ho
Sep 09, 2024
response to request

Stanford scholars respond to a federal RFC on the U.S. AI Safety Institute’s draft guidelines for managing the misuse risk for dual-use foundation models.

Regulation, Policy, Governance
Foundation Models
Privacy, Safety, Security
response to request
AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries
Faiz Surani, Daniel E. Ho
May 23, 2024
news

A new study reveals the need for benchmarking and public evaluations of AI tools in law.

news
On the Societal Impact of Open Foundation Models
Sayash Kapoor, Rishi Bommasani, Daniel E. Ho, Percy Liang, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

news
Transparency of AI EO Implementation: An Assessment 90 Days In
Caroline Meinhardt, Kevin Klyman, Hamzah Daud, Christie M. Lawrence, Rohini Kosoglu, Daniel Zhang, Daniel E. Ho
Feb 21, 2024
news

The U.S. government has made swift progress and broadened transparency, but that momentum needs to be maintained for the next looming deadlines.

news
Daniel E. Ho's Testimony Before the California Senate Governmental Organization Committee and the Senate Budget and Fiscal Review Subcommittee No. 4 on State Administration and General Government
Daniel E. Ho
Quick Read
Feb 21, 2024
testimony

In this testimony presented in the California Senate Hearing “California at the Forefront: Steering AI Towards Ethical Horizons,” Daniel E. Ho offers three recommendations for how California should lead the nation in responsible AI innovation by nurturing and attracting technical talent into public service, democratizing access to computing and data resources, and addressing the information asymmetry about AI risks.

Regulation, Policy, Governance
Government, Public Administration
testimony
Generating Medical Errors: GenAI and Erroneous Medical References
Kevin Wu, Eric Wu, Daniel E. Ho, James Zou
Feb 12, 2024
news

A new study finds that large language models used widely for medical assessments cannot back up claims.

Healthcare
news