What is Prompt Injection? | Stanford HAI
Stanford
University
  • Stanford Home
  • Maps & Directions
  • Search Stanford
  • Emergency Info
  • Terms of Use
  • Privacy
  • Copyright
  • Trademarks
  • Non-Discrimination
  • Accessibility
© Stanford University.  Stanford, California 94305.
Skip to content
  • About

    • About
    • People
    • Get Involved with HAI
    • Support HAI
    • Subscribe to Email
  • Research

    • Research
    • Fellowship Programs
    • Grants
    • Student Affinity Groups
    • Centers & Labs
    • Research Publications
    • Research Partners
  • Education

    • Education
    • Executive and Professional Education
    • Government and Policymakers
    • K-12
    • Stanford Students
  • Policy

    • Policy
    • Policy Publications
    • Policymaker Education
    • Student Opportunities
  • AI Index

    • AI Index
    • AI Index Report
    • Global Vibrancy Tool
    • People
  • News
  • Events
  • Industry
  • Centers & Labs

What is Prompt Injection?

Prompt Injection is a type of security attack that uses malicious input to trick a Large Language Model (LLM) into behaving in an unintended way. By crafting a deceptive prompt, an attacker can cause the model to bypass its safety guidelines, reveal sensitive information, or follow harmful instructions it was designed to refuse. This vulnerability exploits how the model processes and prioritizes instructions, essentially hijacking its intended function

Navigate
  • About
  • Events
  • Careers
  • Search
Participate
  • Get Involved
  • Support HAI
  • Contact Us

Stay Up To Date

Get the latest news, advances in research, policy work, and education program updates from HAI in your inbox weekly.

Sign Up For Latest News


Prompt Injection mentioned at Stanford HAI

Explore Similar Terms:

Prompt Injection | AI Safety | AI Alignment

See Full List of Terms & Definitions

The Shaky Foundations of Foundation Models in Healthcare
Michael Wornow, Scott Fleming, Jason Fries, Nigam Shah
Feb 27
news

Scholars detail the current state of large language models in healthcare and advocate for better evaluation frameworks.

The Shaky Foundations of Foundation Models in Healthcare

Michael Wornow, Scott Fleming, Jason Fries, Nigam Shah
Feb 27

Scholars detail the current state of large language models in healthcare and advocate for better evaluation frameworks.

Healthcare
Machine Learning
news
Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn
Quick ReadJul 21
policy brief

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy

Riana Pfefferkorn
Quick ReadJul 21

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Privacy, Safety, Security
Education, Skills
policy brief
Toward Responsible Development and Evaluation of LLMs in Psychotherapy
Elizabeth C. Stade, Shannon Wiltsey Stirman, Lyle Ungar, Cody L. Boland, H. Andrew Schwartz, David B. Yaden, João Sedoc, Robert J. DeRubeis, Robb Willer, Jane P. Kim, Johannes Eichstaedt
Quick ReadJun 13
policy brief

This brief reviews the current landscape of LLMs developed for psychotherapy and proposes a framework for evaluating the readiness of these AI tools for clinical deployment.

Toward Responsible Development and Evaluation of LLMs in Psychotherapy

Elizabeth C. Stade, Shannon Wiltsey Stirman, Lyle Ungar, Cody L. Boland, H. Andrew Schwartz, David B. Yaden, João Sedoc, Robert J. DeRubeis, Robb Willer, Jane P. Kim, Johannes Eichstaedt
Quick ReadJun 13

This brief reviews the current landscape of LLMs developed for psychotherapy and proposes a framework for evaluating the readiness of these AI tools for clinical deployment.

Healthcare
policy brief
Who Is Liable When Generative AI Says Something Harmful?
Peter Henderson
Oct 11
news

Courts will have to grapple with this new challenge, although scholars believe much of generative AI will be protected by the First Amendment.

Who Is Liable When Generative AI Says Something Harmful?

Peter Henderson
Oct 11

Courts will have to grapple with this new challenge, although scholars believe much of generative AI will be protected by the First Amendment.

news

Enroll in a Human-Centered AI Course

This AI program covers technical fundamentals, business implications, and societal considerations.