Harnessing AI to Improve Access to Justice in Civil Courts

Date: March 04, 2025
Topics: Law Enforcement and Justice

Stanford law professor advocates for public-sector AI tools to support self-represented litigants in civil cases.

In the United States, 20 million civil cases are filed annually. Of these, 75% involve at least one party without legal representation. 

Many of these cases are small but highly impactful: rent evictions, debt collection, or mortgage foreclosures. And many defendants don’t show up to plead their cases, resulting in default judgments that can be unjust.

David Engstrom, the LSVF Professor of Law at Stanford University and co-director of the Deborah L. Rhode Center on the Legal Profession, identified several root causes for this low participation rate. People may struggle with the time and resource costs, lack access to legal representation, find the legal process confusing, or have difficulty locating tools online.

Artificial intelligence presents "massive access-widening potential," Engstrom said during a recent seminar at the Stanford Institute for Human-Centered AI. AI could translate legal language, map legal problems to solutions, and automate some court processes. However, he cautioned that the technology poses serious risks of hallucination and bias that disproportionately affect low-income Americans.

To maximize benefits and mitigate harms, Engstrom encouraged building tools that don’t rely on complex legal reasoning but still provide valuable assistance to litigants.

Engstrom advocated for courthouse AI, rather than a private-sector legal technology marketplace, to ensure effective and trustworthy AI that assists low-income or unserved Americans. He outlined two paths for courts: they can create their own court-hosted digital tools (the “make” option) or, in effect, “buy” the services of the private legal technology marketplace by making themselves more accessible to those providers. While the “make” option faces challenges due to the court system’s limited technical capacity, the “buy” option is hampered by regulatory restrictions that define who (or, in AI’s case, what) can deliver legal services. Additionally, a lack of standardized technology systems across the country’s 14,000 local court jurisdictions limits scalability, which may deter private companies.

Engstrom offered two promising examples of courthouse AI stemming from a collaboration between his law school research team and the Los Angeles Superior Court that launched last year. The team is developing two key AI tools, one for courthouse staff and one for litigants:

  1. An automated "default prove-up" system that screens requests for default judgment (sought when a defendant fails to show up) for legal errors before the judgment is entered. This could catch up to 10% of problematic judgments, a major improvement over current manual reviews. (A minimal illustrative sketch of such checks follows this list.)

  2. A triage and referral tool to help connect self-represented litigants with appropriate legal help resources and services based on their specific situations.
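To make the first tool concrete, here is a minimal sketch of what rule-based prove-up screening might look like. The field names, the 30-day response window, and the specific checks (such as the common requirement that a default judgment not exceed the amount demanded in the complaint) are illustrative assumptions, not the actual design of the Los Angeles Superior Court system:

```python
# Hypothetical default prove-up screen: flag requests for default judgment
# that show common legal errors before a judgment is entered.
# All fields, thresholds, and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class DefaultJudgmentRequest:
    case_id: str
    amount_demanded: float          # amount pleaded in the complaint
    amount_requested: float         # amount sought in the default judgment
    proof_of_service_on_file: bool  # was the defendant properly served?
    days_since_service: int

def review(req: DefaultJudgmentRequest) -> list[str]:
    """Return flags for a clerk to review; an empty list means no issues found."""
    flags = []
    # A default judgment generally cannot exceed the amount pleaded.
    if req.amount_requested > req.amount_demanded:
        flags.append("judgment amount exceeds amount demanded in complaint")
    # No judgment without proof the defendant was actually served.
    if not req.proof_of_service_on_file:
        flags.append("no proof of service on file")
    # The defendant must have been given the full window to respond
    # (30 days here, purely as an assumed placeholder).
    if req.days_since_service < 30:
        flags.append("response window has not yet elapsed")
    return flags

if __name__ == "__main__":
    req = DefaultJudgmentRequest(
        case_id="24STCV01234",
        amount_demanded=5000.0,
        amount_requested=7500.0,
        proof_of_service_on_file=True,
        days_since_service=45,
    )
    for flag in review(req):
        print(f"FLAG [{req.case_id}]: {flag}")
```

In a real deployment, checks like these would only route cases to human reviewers rather than decide them, consistent with Engstrom’s emphasis on tools that assist litigants without attempting complex legal reasoning.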

Engstrom called courthouse AI "one of the most compelling but also underdeveloped areas of public sector AI." He invited more researchers to contribute to this work and help develop AI that responsibly serves the public interest.

Watch the full seminar.

Authors
  • Shana Lynch

Related News

AI-Faked Cases Become Core Issue Irritating Overworked Judges
Bloomberg Law
Dec 29, 2025
Media Mention

As AI-hallucinated case citations flood the courts, judges have increased fines for attorneys who have cited fake cases. HAI Policy Fellow Riana Pfefferkorn hopes this will "make the firm sit up and pay better attention."


Why You Can (And Should) Opt Out Of TSA Facial Recognition Right Now
HuffPost
Nov 06, 2025
Media Mention

Jennifer King, Policy Fellow at Stanford HAI who specializes in privacy, discusses vagueness in the TSA’s public communications about what it is doing with facial recognition data.


Our Racist, Terrifying Deepfake Future Is Here
Nature
Nov 03, 2025
Media Mention

“It connects back to my fear that the people with the fewest resources will be most affected by the downsides of AI,” says HAI Policy Fellow Riana Pfefferkorn in response to a viral AI-generated deepfake video.
