Daniel E. Ho | Stanford HAI

Faculty, Senior Fellow

Daniel E. Ho

William Benjamin Scott and Luna M. Scott Professor of Law | Professor of Political Science | Professor of Computer Science (by courtesy) | Senior Fellow, Stanford HAI | Senior Fellow, Stanford Institute for Economic and Policy Research | Director of the Regulation, Evaluation, and Governance Lab (RegLab)

Topics
Democracy
Government, Public Administration
Law Enforcement and Justice
Regulation, Policy, Governance
External Bio

Daniel E. Ho, J.D., Ph.D., is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, and is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences and Senior Fellow of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).


Latest Related to Daniel E. Ho

policy brief

Cleaning Up Policy Sludge: An AI Statutory Research System

Daniel E. Ho, Derek Ouyang, Lindsey A. Gailmard, Faiz Surani, Allison Casasola, Varun Magesh, Emily J. Robitschek, Christine Tsang
Government, Public Administration | Jun 18

This brief introduces a novel AI tool that performs statutory surveys to help governments—such as the San Francisco City Attorney's Office—identify policy sludge and accelerate legal reform.

media mention

San Francisco Wants To Use AI To Save Itself From Bureaucracy

POLITICO
Government, Public Administration | Jun 05

Daniel E. Ho, HAI Senior Fellow and Director of the Stanford RegLab, is working with City Attorney David Chiu to identify and delete old, redundant municipal code sections.

response to request

Response to OSTP’s Request for Information on the Development of an AI Action Plan

Rishi Bommasani, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Percy Liang, Jennifer King, Russell Wald
Regulation, Policy, Governance | Mar 17

In this response to a request for information issued by the National Science Foundation’s Networking and Information Technology Research and Development National Coordination Office (on behalf of the Office of Science and Technology Policy), scholars from Stanford HAI, CRFM, and RegLab urge policymakers to prioritize four areas of policy action in their AI Action Plan: 1) Promote open innovation as a strategic advantage for U.S. competitiveness; 2) Maintain U.S. AI leadership by promoting scientific innovation; 3) Craft evidence-based AI policy that protects Americans without stifling innovation; 4) Empower government leaders with resources and technical expertise to ensure a “whole-of-government” approach to AI governance.

All Related

Assessing the Implementation of Federal AI Leadership and Compliance Mandates
Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Mirac Suzgun, Jennifer Wang, Kazia Nowacki
Jan 17, 2025
whitepaper

This white paper assesses federal efforts to advance leadership on AI innovation and governance through recent executive actions and emphasizes the need for senior-level leadership to achieve a whole-of-government approach.

Government, Public Administration
Regulation, Policy, Governance
Stanford AI Model Helps Locate Racist Deeds In Santa Clara County
KQED
Oct 21, 2024
media mention

Stanford's RegLab, directed by HAI Senior Fellow Daniel E. Ho, developed an AI model that helped Santa Clara accelerate the process of flagging and mapping restrictive covenants. 

Government, Public Administration
Regulation, Policy, Governance
Law Enforcement and Justice
Machine Learning
Foundation Models
AI Seeks Out Racist Language in Property Deeds for Termination
Bloomberg Law
Oct 17, 2024
media mention

Dan Ho, HAI Senior Fellow and director of the Stanford RegLab, discusses RegLab's AI model that analyzes decades of property records, helping to identify illegal racially restrictive language in housing documents.

Machine Learning
Regulation, Policy, Governance
Foundation Models
Law Enforcement and Justice
Congressional Boot Camp on AI
Oct 06, 2024

The Congressional Boot Camp on AI convenes staffers from both the House and Senate on Stanford University’s campus in California. Each session will feature world-class scholars from Stanford University, leaders from Silicon Valley, and pioneers from civil society organizations. The 2025 boot camp will be held August 11-13, 2025.

Response to U.S. AI Safety Institute’s Request for Comment on Managing Misuse Risk For Dual-Use Foundation Models
Rishi Bommasani, Daniel E. Ho, Percy Liang, Alexander Wan, Yifan Mai
Sep 09, 2024
response to request

In this response to the U.S. AI Safety Institute’s (US AISI) request for comment on its draft guidelines for managing the misuse risk for dual-use foundation models, scholars from Stanford HAI, the Center for Research on Foundation Models (CRFM), and the Regulation, Evaluation, and Governance Lab (RegLab) urge the US AISI to strengthen its guidance on reproducible evaluations and third-party evaluations, as well as clarify guidance on post-deployment monitoring. They also encourage the institute to develop similar guidance for other actors in the foundation model supply chain and for non-misuse risks, while ensuring the continued open release of foundation models absent evidence of marginal risk.

Regulation, Policy, Governance
Foundation Models
Privacy, Safety, Security
AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries
Daniel E. Ho, Faiz Surani
May 23, 2024
news

A new study reveals the need for benchmarking and public evaluations of AI tools in law.

On the Societal Impact of Open Foundation Models
Rishi Bommasani, Daniel E. Ho, Percy Liang, Sayash Kapoor, and Arvind Narayanan
Feb 27, 2024
news

New research adds precision to the debate on openness in AI.

Transparency of AI EO Implementation: An Assessment 90 Days In
Rohini Kosoglu, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Hamzah Daud, Kevin Klyman, Christie M. Lawrence
Feb 22, 2024
explainer

The U.S. government has made swift progress and broadened transparency, but that momentum needs to be maintained for the next looming deadlines.

Regulation, Policy, Governance
Government, Public Administration
Transparency of AI EO Implementation: An Assessment 90 Days In
Rohini Kosoglu, Daniel E. Ho, Caroline Meinhardt, Daniel Zhang, Hamzah Daud, Kevin Klyman, Christie M. Lawrence
Feb 21, 2024
news

The U.S. government has made swift progress and broadened transparency, but that momentum needs to be maintained for the next looming deadlines.

Daniel E. Ho's Testimony Before the California Senate Governmental Organization Committee and the Senate Budget and Fiscal Review Subcommittee No. 4 on State Administration and General Government
Daniel E. Ho
Feb 21, 2024
testimony

In this testimony presented in the California Senate Hearing “California at the Forefront: Steering AI Towards Ethical Horizons,” Daniel E. Ho offers three recommendations for how California should lead the nation in responsible AI innovation by nurturing and attracting technical talent into public service, democratizing access to computing and data resources, and addressing the information asymmetry about AI risks.

Regulation, Policy, Governance
Generating Medical Errors: GenAI and Erroneous Medical References
James Zou, Daniel E. Ho, Kevin Wu, Eric Wu
Feb 12, 2024
news

A new study finds that large language models used widely for medical assessments cannot back up claims.

Healthcare
Hallucinating Law: Legal Mistakes with Large Language Models are Pervasive
Daniel E. Ho, Matthew Dahl, Varun Magesh, Mirac Suzgun
Jan 11, 2024
news

A new study finds disturbing and pervasive errors among three popular models on a wide range of legal tasks.
