Privacy, Safety, Security | Stanford HAI


All Work Published on Privacy, Safety, Security

Preparing for the Age of Deepfakes and Disinformation
Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot
Quick Read | Nov 01, 2020
Policy Brief

This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.

Communications, Media
Privacy, Safety, Security
The Digitalist Papers: A Vision for AI and Democracy
Nick Adams Pandolfo
Sep 24, 2024
News

Stanford’s Digital Economy Lab taps a multidisciplinary group of thinkers to offer insights on AI and governance in a volume called The Digitalist Papers.

Democracy
Economy, Markets
Privacy, Safety, Security
Regulation, Policy, Governance
Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties
Shana Lynch
Sep 24, 2024
News

Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.

Law Enforcement and Justice
Privacy, Safety, Security
Regulation, Policy, Governance
Real AI Threats Are Disinformation, Bias, And Lack Of Transparency: Stanford’s James Landay
The Economic Times
Jul 30, 2024
Media Mention

James Landay, Co-Founder of Stanford HAI, says the real harms of AI are disinformation, deepfakes, discrimination, and job displacement, though not much of this has happened yet.

Workforce, Labor
Privacy, Safety, Security
Ethics, Equity, Inclusion
AI Companies Promised To Self-Regulate One Year Ago. What’s Changed?
MIT Technology Review
Jul 22, 2024
Media Mention

CRFM Society Lead Rishi Bommasani comments on the lack of clarity on what has changed in the year since major AI companies adopted the White House's set of eight voluntary commitments on how to develop AI in a safe and trustworthy way.

Privacy, Safety, Security
Government, Public Administration
What AI Is The Best? Chatbot Arena Relies On Millions Of Human Votes
Forbes
Jul 18, 2024
Media Mention

Vanessa Parli, HAI Director of Research Programs, explains the importance of evaluation methods when it comes to AI benchmarking, noting the significance of assessing traits like "bias, toxicity, truthfulness, and other responsibility aspects."

Generative AI
Privacy, Safety, Security
Ethics, Equity, Inclusion