Digital Advertisers Often Fund Misinformation Unwittingly | Stanford HAI
Date: June 07, 2024
Topics: Economy, Markets; Finance, Business

The bulk purchase of digital ads through automated placement platforms often matches legitimate ad buyers with websites that traffic in false and misleading information.

The proliferation of misinformation during recent elections, in the climate debate, and through the pandemic has led social scientists to investigate how websites masquerading as legitimate news outlets stay afloat financially. The truth is, they do it the old-fashioned way: by selling ads. 

In a new paper, researchers from Stanford University, including Digital Economy Lab Director Erik Brynjolfsson, explain how many legitimate, well-meaning companies with deep advertising budgets may be blindly buying ad placements on such websites. As a result, these companies unwittingly fund misinformation and risk reputational damage when their ads appear alongside false or misleading content, lending tacit approval to the misinformers. 

“Publishers of misinformation can make a lot of money publishing these false or misleading statements on a range of socially relevant topics—public health, political content, and environmental news. The key to tackling misinformation is to cut off its funding,” notes Wajeeha Ahmad, a doctoral student in management science and engineering and first author of the study.  

Know Where Your Ads Go

The problem stems from how digital ads are purchased. In traditional media, human ad buyers serve as intermediaries for advertisers, vetting the legitimacy of the newspapers, television programs, and magazines where ads will run. These buyers therefore place ads only in news sources they trust.  

Read the study, "Companies Inadvertently Fund Online Misinformation Despite Consumer Backlash"

 

In the digital ad world, however, that purchasing process is usually automated. The vast majority of online display advertising today is done via digital ad platforms that automatically distribute ads across millions of websites.

“Advertisers get lots of information about the demographics of the people who see their ads, but not much about the content on the websites,” says Brynjolfsson, the Jerry Yang and Akiko Yamazaki Professor and a senior fellow at the Stanford Institute for Human-Centered AI (HAI). “As a consequence of this lack of transparency on the platforms, the advertisers are often unaware their ads are appearing on these websites and funding misinformation.”

Supply-Side Economics

With the root causes laid out, the authors turn to what can be done about the problem. Their recommendations are twofold. First, digital ad platforms can make it easier for advertisers to learn when their ads are appearing on questionable websites, in essence letting advertisers vet and select their placements in a model closer to traditional ad buying. 

Second, the authors suggest making it easier for people to learn which companies are financing misinformation—and whether they are doing so purposefully or unwittingly. This transparency would allow consumers concerned about misinformation to boycott or avoid companies that intentionally fund misinformation and let company decision-makers know when their ads are appearing on questionable websites.
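The transparency the authors call for amounts to a simple audit: compare where ads actually ran against a list of known misinformation outlets. A minimal sketch of that idea, assuming a hypothetical placement report and an illustrative domain blocklist (neither reflects any real platform's API or any actual list):

```python
# Hypothetical sketch: auditing an advertiser's placement report against a
# third-party list of misinformation domains. All domain names, field names,
# and the report format are illustrative assumptions.

MISINFO_DOMAINS = {"fake-news-daily.example", "clickbait-truth.example"}

placements = [
    {"domain": "respected-paper.example", "spend_usd": 1200.00},
    {"domain": "fake-news-daily.example", "spend_usd": 340.50},
    {"domain": "clickbait-truth.example", "spend_usd": 89.99},
]

def audit(placements, blocklist):
    """Split ad spend into flagged (on blocklisted domains) vs. total."""
    flagged = [p for p in placements if p["domain"] in blocklist]
    flagged_spend = sum(p["spend_usd"] for p in flagged)
    total_spend = sum(p["spend_usd"] for p in placements)
    return flagged, flagged_spend, total_spend

flagged, flagged_spend, total = audit(placements, MISINFO_DOMAINS)
print(f"{len(flagged)} flagged placements: "
      f"${flagged_spend:.2f} of ${total:.2f} total spend")
```

In practice the hard part is not the comparison but the inputs: platforms would need to expose domain-level placement data, and some trusted party would need to maintain the blocklist, which is exactly the transparency gap the paper identifies.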

“This would, in effect, demonetize misinformation,” says co-author Chuck Eesley, the W.M. Keck Foundation Faculty Scholar in the Department of Management Science and Engineering. “It would encourage advertisers to steer their ads away from suspect outlets, decrease unintentional ad revenue and, in time, put misinformation sites out of business.”

“It’s simple supply-side economics,” says Ahmad. “The key to disincentivizing the supply of misinformation is greater transparency for both consumers and advertisers.” 

Giving both groups better and more timely knowledge about the consequences of their choices, she says, will let their buying decisions, whether of consumer products or advertising space, reflect their preference to defund the suppliers of misinformation. 

“The vast majority of managers and decision-makers we surveyed want to engage in responsible management practices; making it easier for them to learn where their ads are showing up and act accordingly could help them make more informed decisions,” she adds. 

The Stanford Digital Economy Lab is a center within Stanford HAI. Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more. 

Contributor(s)
Andrew Myers

Related News

Inside the AI Index: 12 Takeaways from the 2026 Report
Shana Lynch
Apr 13, 2026
News

The annual report reveals a field hitting breakthrough capabilities while raising urgent questions about environmental costs, transparency, and who benefits from the technology.


Economists Once Dismissed the A.I. Job Threat, But Not Anymore
New York Times
Apr 03, 2026
Media Mention

HAI Senior Fellow and Director of the Digital Economy Lab, Erik Brynjolfsson, speaks about the rapid speed of AI impacts on the economy.


What Davos Said About AI This Year
Shana Lynch
Jan 28, 2026
News
James Landay and Vanessa Parli

World leaders focused on ROI over hype this year, discussing sovereign AI, open ecosystems, and workplace change.
