Law Enforcement and Justice | Stanford HAI

All Work Published on Law Enforcement and Justice

Law Clerk vs. AI? Courthouse Test Highlights Judicial Curiosity
Bloomberg Law
Jul 03, 2024
Media Mention

Stanford HAI Senior Fellow Daniel E. Ho comments on his research on legal hallucinations in large language models and the viability of using similar models for judicial interpretation.

Natural Language Processing
Foundation Models
Law Enforcement and Justice
Media Mention
AI Accuracy in Legal Research Remains in ‘Check Your Work’ Phase
Bloomberg Law
Jul 02, 2024
Media Mention

Stanford HAI Senior Fellow Daniel E. Ho's research explains why retrieval-augmented generation (RAG) based legal research tools still make mistakes and struggle to complete legal research tasks.

Law Enforcement and Justice
Generative AI
Media Mention
Reduce AI Hallucinations With This Neat Software Trick
WIRED
Jun 14, 2024
Media Mention

Stanford HAI Senior Fellow Dan Ho gives input on how to reduce AI hallucinations and discusses his research into AI legal tools that rely on retrieval-augmented generation.

Generative AI
Law Enforcement and Justice
Media Mention
What A Study of AI Copilots for Lawyers Says About the Future of AI for Everyone
Fortune
Jun 04, 2024
Media Mention

A new study on AI legal research copilots co-authored by HAI senior fellows Daniel Ho and Chris Manning reveals that while retrieval-augmented generation (RAG) reduces hallucination rates, they remain higher than ideal.

Law Enforcement and Justice
Media Mention
AI Can’t Handle the Truth When it Comes to the Law
Marketplace
Mar 12, 2024
Media Mention

HAI Senior Fellow Daniel E. Ho shows that large language models hallucinate frequently when used for legal queries.

Machine Learning
Natural Language Processing
Law Enforcement and Justice
Media Mention