Privacy, Safety, Security | Stanford HAI


All Work Published on Privacy, Safety, Security

Preparing for the Age of Deepfakes and Disinformation
Dan Boneh, Andrew J. Grotto, Patrick McDaniel, Nicolas Papernot
Quick Read | Nov 01, 2020
Policy Brief

This brief warns of the dangers of generative adversarial networks that can make realistic deepfakes, calling for comprehensive norms, regulations, and laws to counter AI-driven disinformation.

Communications, Media
Privacy, Safety, Security
Stanford HAI Engages ASEAN Leaders in Critical AI Dialogue Amidst Regional Challenges
Drew Spence
Oct 23, 2024
News

At a workshop preceding ASEAN’s Ministerial Meeting, Stanford faculty and ASEAN delegates explored AI’s impact on governance, fairness, and regional cooperation.

Privacy, Safety, Security
Machine Learning
OpenAI Fast-Tracks AI Agents. How Do We Balance Benefits With Risks?
Forbes
Oct 04, 2024
Media Mention

Peter Norvig, Distinguished Education Fellow at Stanford HAI, comments on how limiting an AI agent’s budget, transaction times, and capabilities can help AI agents “operate safely within defined boundaries.”

Ethics, Equity, Inclusion
Privacy, Safety, Security
The Digitalist Papers: A Vision for AI and Democracy
Nick Adams Pandolfo
Sep 24, 2024
News

Stanford’s Digital Economy Lab taps a multidisciplinary group of thinkers to offer insights on AI and governance in a volume called The Digitalist Papers.

Democracy
Economy, Markets
Privacy, Safety, Security
Regulation, Policy, Governance
Riana Pfefferkorn: At the Intersection of Technology and Civil Liberties
Shana Lynch
Sep 24, 2024
News

Stanford HAI’s new Policy Fellow will study AI’s implications for privacy and safety, and explore how we can build rights-respecting artificial intelligence.

Law Enforcement and Justice
Privacy, Safety, Security
Regulation, Policy, Governance
Real AI Threats Are Disinformation, Bias, And Lack Of Transparency: Stanford’s James Landay
The Economic Times
Jul 30, 2024
Media Mention

James Landay, Co-Founder of Stanford HAI, says disinformation, deepfakes, discrimination, and job displacement, of which not much has happened yet, are the real harms of AI.

Workforce, Labor
Privacy, Safety, Security
Ethics, Equity, Inclusion