Stanford HAI Senior Fellow Daniel E. Ho comments on his research on legal hallucinations in large language models and the viability of using similar models for judicial interpretation.
Stanford HAI Senior Fellow Daniel E. Ho's research explains why legal research tools based on retrieval-augmented generation (RAG) still make mistakes and struggle to complete legal research tasks.
Stanford HAI Senior Fellow Dan Ho gives input on how to reduce AI hallucinations and discusses his research into AI legal tools that rely on retrieval augmented generation.
A new study on AI legal research copilots co-authored by HAI Senior Fellows Daniel Ho and Chris Manning reveals that while retrieval-augmented generation (RAG) reduces hallucination rates, those rates remain higher than ideal.
HAI Senior Fellow Daniel E. Ho shows that large language models hallucinate frequently when used for legal queries.