What is Prompt Engineering? | Stanford HAI
What is Prompt Engineering?

Prompt Engineering is the practice of carefully crafting instructions, or "prompts," to guide AI language models toward producing desired outputs. By adjusting the wording, structure, and context provided in a prompt, users can significantly influence the quality, style, and accuracy of the model's responses. This skill has become essential for effectively using large language models across tasks ranging from creative writing to technical problem-solving to data analysis. 
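The levers described above (wording, structure, and context) can be sketched as a small helper that assembles a prompt from explicit parts. This is an illustrative sketch, not a Stanford HAI tool or any particular vendor's API; the function and parameter names are hypothetical.

```python
def build_prompt(task, role=None, context=None, examples=None, output_format=None):
    """Assemble a structured prompt from explicit, adjustable parts.

    Each optional part (role, context, examples, output format) is a lever
    the prompt engineer can tune to steer the model's response.
    """
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {ex}" for ex in examples)
    parts.append(f"Task: {task}")
    if output_format:
        parts.append(f"Respond as {output_format}.")
    return "\n".join(parts)

# The same task, phrased vaguely versus with engineered structure:
vague = build_prompt("Summarize the report.")
engineered = build_prompt(
    "Summarize the report.",
    role="a financial analyst writing for executives",
    context="The report covers Q3 revenue across three regions.",
    output_format="three bullet points, each under 20 words",
)
```

Sending `engineered` instead of `vague` to a language model typically yields output in a predictable voice and format, which is the point of the practice: the model's behavior is shaped by what the prompt makes explicit.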

Prompt Engineering mentioned at Stanford HAI

Explore Similar Terms:

Prompt Injection | Large Language Model (LLM) | Few-Shot Learning


To Practice PTSD Treatment, Therapists Are Using AI Patients
Sarah Wells | Nov 10 | news | Healthcare

Stanford's TherapyTrainer deploys AI to help therapists practice skills for written exposure therapy.

How Well Do Large Language Models Support Clinician Information Needs?
Eric Horvitz, Nigam Shah, Dev Dash | Mar 31 | news | Healthcare, Natural Language Processing, Machine Learning

Stanford experts examine the safety and accuracy of GPT-4 in serving curbside consultation needs of doctors.

Addressing AI-Generated Child Sexual Abuse Material: Opportunities for Educational Policy
Riana Pfefferkorn | Quick Read | Jul 21 | policy brief | Privacy, Safety, Security; Education, Skills

This brief explores student misuse of AI-powered “nudify” apps to create child sexual abuse material and highlights gaps in school response and policy.

Toward Responsible Development and Evaluation of LLMs in Psychotherapy
Elizabeth C. Stade, Shannon Wiltsey Stirman, Lyle Ungar, Cody L. Boland, H. Andrew Schwartz, David B. Yaden, João Sedoc, Robert J. DeRubeis, Robb Willer, Jane P. Kim, Johannes Eichstaedt | Quick Read | Jun 13 | policy brief | Healthcare

This brief reviews the current landscape of LLMs developed for psychotherapy and proposes a framework for evaluating the readiness of these AI tools for clinical deployment.

Who Is Liable When Generative AI Says Something Harmful?
Peter Henderson | Oct 11 | news

Courts will have to grapple with this new challenge, although scholars believe much of generative AI will be protected by the First Amendment.
