Policy Brief

Labeling AI-Generated Content May Not Change Its Persuasiveness

Date
July 30, 2025
Topics
Generative AI
Regulation, Policy, Governance
Read Paper
Abstract

This brief evaluates the impact of authorship labels on the persuasiveness of AI-written policy messages.

Key Takeaways

  • In response to the rapidly improving ability of AI tools to create persuasive content, policymakers are increasingly calling for labels on AI-generated content—but little research has measured whether adding a label impacts the persuasiveness of the underlying messages.

  • We surveyed more than 1,500 people to see how they perceive AI-generated policy messages when told the content had been created by an expert AI model, when told it had been created by a human policy expert, or when told nothing about its authorship.

  • Adding the label changed people’s perceptions of whether the author was AI or human but did not significantly change the persuasiveness of the content itself, regardless of the policy domain (e.g., allowing colleges to pay student athletes) or participant demographics (e.g., political party).

  • Policy proposals requiring AI content labels may enhance transparency, but their inability to affect persuasiveness highlights the need for complementary safeguards (e.g., media literacy education) and ongoing research into how AI disclosure policies shape the information ecosystem. 

Executive Summary

Generative AI tools can now produce persuasive content at unprecedented scale and speed. These tools can be used for positive impact in many ways. But the emergence of persuasive, AI-generated content also enables harmful uses, such as influence operations, misinformation campaigns, and other forms of deception, particularly in political contexts. These risks are compounded by a key problem: People struggle to distinguish AI-generated content from content written by humans, which helps influence campaigns and misinformation thrive.

These risks have led policymakers to call for authorship labels on AI-made content. In the European Union, for instance, the AI Act requires that entities deploying AI-generated or AI-manipulated content label it as such. In the United States, the AI Labeling Act and the AI Disclosure Act, both introduced in Congress in 2023 but not passed, would have implemented similar rules. These calls raise a key question: Will a label change how much the content influences people’s political and public policy views?

In our paper “Labeling Messages as AI-Generated Does Not Reduce Their Persuasive Effects,” we surveyed a diverse group of Americans to investigate whether adding authorship labels affects how they perceive AI-written policy appeals. Across four less-polarized public policy topics, we found no significant difference in people’s support for a policy argument when they were told the argument had been generated by an expert AI model, told it had been written by a human policy expert, or told nothing about its authorship. The labels also had no significant effect on people’s judgments of the content’s accuracy or their intentions to share the policy argument with others.

Policymakers should continue studying and debating effective AI disclosure policies, including how AI content labels may empower users to make more informed decisions about the content they consume. Yet, while labels may enhance transparency, our work suggests that on their own they may be insufficient to address the challenges posed by AI-generated information. Policymakers should pursue further research and explore alternative interventions, including media literacy education and deamplification of AI-generated content.

Introduction

Previous work has explored the persuasiveness of AI-generated content without AI labels; perceptions of the credibility, reliability, or quality of labeled information; and the effect of different labels for AI-generated images on viewers’ beliefs. Little research, however, has examined how attaching labels to AI-generated text affects the content’s persuasiveness. We therefore investigated how AI labels affect the persuasiveness of messages on public policy issues.

There is good reason to expect that AI labels make people more skeptical of the underlying content. For instance, prior research has found that people generally prefer human content over AI content in news, public health messaging, donation solicitations, and social media posts because they perceive the human content to be more trustworthy. On the other hand, AI labels could evoke perceptions of technological expertise and sophistication, leading people to trust the AI-generated information more than if it had been written by a human.

Our study tested both these hypotheses through a randomized experiment with 1,500 U.S. participants. We compared participants’ responses to AI-generated policy messages when labeled as AI-created versus human-authored versus unlabeled across four public policy domains: geoengineering, drug importation, college athlete salaries, and social media platform liability. We selected these four domains from the persuasion dataset as they are neither widely discussed nor highly polarizing policy issues — and thus less likely to clash with people’s well-established or deeply held views. 

After surveying participants’ prior knowledge about, support for, and confidence in their beliefs about one of the four public policy topics, we showed them an AI-generated argument about that policy area. We used OpenAI’s GPT-4o to generate the policy messages, manually editing the text only to correct factual errors; an illustrative sketch of such a generation call appears after the examples below. Examples of article titles include:

  • “Geoengineering poses too many risks and should not be considered.”

  • “Drug importation jeopardizes safety controls and the domestic pharma industry.”

  • “College athletes should be paid salaries.”
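
To make the generation step concrete, the snippet below is a minimal, hypothetical sketch of producing one such policy message with GPT-4o through the OpenAI Python SDK. The prompt wording, the decoding settings, and the specific policy position shown are illustrative assumptions, not the prompts or parameters used in the study.

```python
# Hypothetical sketch of the message-generation step, not the study's actual prompts.
# Assumes the OpenAI Python SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

# Illustrative prompt for one policy position; the study's real prompts are described in the paper.
prompt = (
    "Write a persuasive, factually grounded argument of roughly 300 words for the position: "
    "'College athletes should be paid salaries.' Use a neutral, expert tone."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

message = response.choices[0].message.content
print(message)  # output would then be manually checked and edited only for factual errors
```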

All participants were shown the same message for their assigned policy issue. But they were randomly told either that the message had been written by an expert AI model, that it had been written by a human policy expert, or nothing about its authorship. Participants then indicated their level of support for the policy, their confidence in that support, their judgment of how accurate the message was, and their intention to share it with others. They were also asked about their perceptions of the message’s source. We collected demographic data (e.g., political party affiliation, education level, age) so we could test whether background characteristics moderate the effects of AI labels.
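
As a rough illustration of the design described above, the sketch below randomly assigns participants to one of the three authorship conditions. The condition names and label wording are hypothetical stand-ins, not the study’s exact materials.

```python
# Hypothetical sketch of random assignment to the three authorship-label conditions.
# Condition names and label text are illustrative, not the study's exact wording.
import random

LABELS = {
    "ai_expert": "This message was written by an expert AI model.",
    "human_expert": "This message was written by a human policy expert.",
    "no_label": None,  # participants are told nothing about authorship
}

def assign_condition(participant_id: int, seed: str = "study-seed") -> str:
    """Deterministically assign a participant to one of the three label conditions."""
    rng = random.Random(f"{seed}-{participant_id}")
    return rng.choice(list(LABELS))

# Every participant sees the same policy message; only the authorship label varies.
for pid in range(5):
    condition = assign_condition(pid)
    print(pid, condition, LABELS[condition])
```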

Authors
  • Isabel Gallegos
  • Dr. Chen Shani
  • Weiyan Shi
  • Federico Bianchi
  • Izzy Benjamin Gainsburg
  • Dan Jurafsky
  • Robb Willer

Related Publications

Testimony | Oct 09, 2025
Russ Altman’s Testimony Before the U.S. Senate Committee on Health, Education, Labor, and Pensions
Russ Altman
Topics: Healthcare; Regulation, Policy, Governance; Sciences (Social, Health, Biological, Physical)

In this testimony presented to the U.S. Senate Committee on Health, Education, Labor, and Pensions hearing titled “AI’s Potential to Support Patients, Workers, Children, and Families,” Russ Altman highlights opportunities for congressional support to make AI applications for patient care and drug discovery stronger, safer, and human-centered.

Policy Brief | Sep 10, 2025
Toward Political Neutrality in AI
Jillian Fisher, Ruth E. Appel, Yulia Tsvetkov, Margaret E. Roberts, Jennifer Pan, Dawn Song, Yejin Choi
Topics: Democracy; Generative AI

This brief introduces a framework of eight techniques for approximating political neutrality in AI models.

Testimony | Sep 02, 2025
Michelle M. Mello's Testimony Before the U.S. House Committee on Energy and Commerce Health Subcommittee
Michelle Mello
Topics: Healthcare; Regulation, Policy, Governance

In this testimony presented to the U.S. House Committee on Energy and Commerce’s Subcommittee on Health hearing titled “Examining Opportunities to Advance American Health Care through the Use of Artificial Intelligence Technologies,” Michelle M. Mello calls for policy changes that will promote effective integration of AI tools into healthcare by strengthening trust.

Response to Request | Aug 20, 2025
Response to the Department of Education’s Request for Information on AI in Education
Victor R. Lee, Vanessa Parli, Isabelle Hau, Patrick Hynes, Daniel Zhang
Topics: Education, Skills; Regulation, Policy, Governance

Stanford scholars respond to a federal RFI on advancing AI in education, urging policymakers to anchor their approach in proven research.