As AI use grows, how can we safeguard privacy, security, and data protection for individuals and organizations?
In this workshop we will ask: How might we design information systems for authenticity? We will bring together technologists, journalists, legal experts, and archivists for an interdisciplinary conversation about declining trust in digital content and how we might bolster trust in our information ecosystems.
Stanford HAI joined global leaders to discuss the balance between AI innovation and safety and explore future policy paths.
This brief examines the barriers to independent AI evaluation and proposes safe harbors to protect good-faith third-party research.
New research tests large language models for consistency across diverse topics, revealing that while they handle neutral topics reliably, controversial issues lead to varied answers.
This brief presents a novel assessment framework for evaluating the quality of AI benchmarks and scores 24 benchmarks against the framework.