
Radical Proposal: Middleware Could Give Consumers Choices Over What They See Online

Date
October 20, 2021
Topics
Privacy, Safety, Security

To lessen internet platforms’ power over democratic political debate, a group of Stanford researchers is advocating for a competitive market in middleware.

Google, Facebook and Twitter control an enormous share of the nation’s political conversation. And as these platforms have increasingly moderated political content, their power has begun to distort and degrade the quality of American democracy, says Francis Fukuyama, senior fellow at Stanford University’s Freeman Spogli Institute for International Studies (FSI) and director of FSI’s Center on Democracy, Development and the Rule of Law (CDDRL).

“Big private corporations have neither the capacity nor the legitimacy to decide what types of information should be amplified or suppressed,” Fukuyama says.

In a white paper published by the Stanford Cyber Policy Center’s Working Group on Platform Scale, Fukuyama and his colleagues, including Ashish Goel, Stanford professor of management science and engineering, and Stanford HAI International Policy Fellow Marietje Schaake, offer a novel proposal to deal with this problem: outsourcing content moderation to a layer of competitive middleware companies that would let users of these platforms tailor their search and social media feeds to suit their personal preferences or objectives. “This would give users control over what they see rather than leaving it up to a nontransparent algorithm that is being used by the platform,” Fukuyama says.

The middleware proposal was featured as part of Stanford HAI’s “Policy and AI: Four Radical Proposals for a Better Society” conference, held November 9-10, 2021. Below, watch the full presentation.

Middleware: What Is It?

Middleware is software that rides on top of an existing internet or social media platform such as Google, Facebook or Twitter and modifies how the platform’s underlying data is presented. A middleware provider might, for example, rate the credibility of various information sources, or filter product searches to show only items that are eco-friendly or made in America, to fit a particular user’s preferences. “The point would be to give users of these platforms control over what they see in their searches or news feeds, rather than leaving that up to a platform that is completely nontransparent,” Fukuyama says.
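
To make the mechanism concrete, here is a minimal sketch in Python of the kind of re-ranking layer a middleware provider might run between a platform’s feed and the user. Everything in it is hypothetical: the FeedItem fields, the credibility table, the scores and the threshold are invented for illustration, and the white paper does not prescribe any particular implementation.

    from dataclasses import dataclass

    @dataclass
    class FeedItem:
        source: str        # domain of the item's publisher
        text: str
        engagement: float  # the platform's own ranking signal

    # A hypothetical credibility table a middleware provider might
    # maintain; the domains and scores are invented for illustration.
    CREDIBILITY = {
        "example-news.com": 92.5,
        "example-blog.net": 40.0,
    }

    def rerank(feed, min_credibility=60.0):
        """Drop items from sources below the user's credibility threshold,
        then order what remains by source credibility rather than by the
        platform's engagement signal."""
        kept = [item for item in feed
                if CREDIBILITY.get(item.source, 0.0) >= min_credibility]
        return sorted(kept,
                      key=lambda item: CREDIBILITY[item.source],
                      reverse=True)

A real provider would replace the static table with its own ratings service and surface the threshold as a user setting, which is precisely the kind of choice the proposal wants to move from the platform to the user.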

Currently, at least one startup, NewsGuard, has entered this niche in partnership with Microsoft. Its middleware, available as a browser plugin, rates the credibility of more than 6,000 news and information sources on a numeric scale.
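
NewsGuard’s actual ratings and lookup interface are proprietary, so the sketch below only mimics the behavior described above, decorating links with a per-source score; the domains, scores and cutoff are mock values.

    from urllib.parse import urlparse

    # Mock per-source ratings on a 0-100 scale, standing in for a
    # provider's database; all values here are invented.
    RATINGS = {
        "example-news.com": 95,
        "example-tabloid.com": 20,
    }

    def label_link(url: str) -> str:
        """Annotate a link with a credibility badge, roughly the way a
        browser plugin might decorate search results or a news feed."""
        domain = urlparse(url).netloc.removeprefix("www.")  # Python 3.9+
        score = RATINGS.get(domain)
        if score is None:
            return f"{url} [unrated]"
        badge = "generally credible" if score >= 60 else "low credibility"
        return f"{url} [{score}/100, {badge}]"

    print(label_link("https://www.example-news.com/story"))
    # https://www.example-news.com/story [95/100, generally credible]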

“NewsGuard is an example that we would like to see more of,” Fukuyama says. “Consumers should have a choice among a variety of middleware options rather than leaving it all up to the internet platforms.”

Challenges to Making Middleware a Reality

Technological, business and governmental challenges would need to be overcome for the middleware proposal to be implemented, Fukuyama says.

On the technological front, a relatively small organization would have to be capable of developing AI that could sort through internet platform data in a timely way. The existing platforms argue that only they are up to this task, Fukuyama says. “Until someone actually tries to do it, we won’t know whether that’s true or not.”
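
One way to frame that feasibility question is throughput: a middleware provider would likely need to score posts as they arrive rather than in bulk. The sketch below is hypothetical; score stands in for whatever classifier a provider might build, and nothing here reflects an actual platform API.

    from typing import Callable, Iterable, Iterator

    def moderate_stream(posts: Iterable[dict],
                        score: Callable[[dict], float],
                        threshold: float = 0.5) -> Iterator[dict]:
        """Filter posts one at a time as they arrive, so a small provider
        can keep pace with a platform's volume without first storing and
        batch-processing the whole feed."""
        for post in posts:
            if score(post) >= threshold:
                yield post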

Read all the proposals:
  • Universal Basic Income to Offset Job Losses Due to Automation
  • Data Cooperatives Could Give Us More Power Over Our Data
  • Third-Party Auditor Access for AI Accountability

As a business matter, the fact that NewsGuard seems to be the lone example of middleware that evaluates content accuracy suggests there may not yet be enough consumer demand, or enough of a commercial incentive, to drive this business model, Fukuyama says. He and his colleagues are thinking through potential government regulations that could make it profitable for companies to offer middleware services to consumers; for example, the government might have to require the platforms to share some of their ad revenue to make such products viable. And because the United States government doesn’t currently have the capacity for this kind of regulation, Fukuyama says, “We think there should be a new specialized agency for digital regulation.”

Middleware’s Potential Impact on Democratic Debate

Critics are concerned that the middleware proposal might make the online misinformation ecosystem worse, Fukuyama says: while some middleware companies would filter out what the mainstream media considers fake news, others would intentionally amplify it.

Given that we live in a country governed by the First Amendment, the objective of public policy cannot be to prevent people from saying things that are false or misleading, Fukuyama says. Middleware wouldn’t stamp out fake news, but it could keep the big platforms from artificially amplifying it to mainstream Americans.

“Because these internet platforms are so large, their content moderation decisions have an oversized impact. We just want to reduce that power, not create an alternative source of power that will somehow cleanse American political discourse of falsehoods,” Fukuyama says. “We don’t think that kind of power is safe for anyone to deploy.”

Watch the Presentation


Contributor(s)
Katharine Miller

Related News

Struggling DNA Testing Firm 23andMe To Be Bought For $256m
BBC | Media Mention | May 19, 2025
Stanford HAI Policy Fellow Jennifer King speaks about the data privacy implications of 23andMe's purchase by Regeneron.

The Evolution of Safety: Stanford’s Mykel Kochenderfer Explores Responsible AI in High-Stakes Environments
Scott Hadly | News | May 09, 2025
As AI technologies rapidly evolve, Professor Kochenderfer leads the charge in developing effective validation mechanisms to ensure safety in autonomous systems like vehicles and drones.

A Framework to Report AI’s Flaws
Andrew Myers | News | Apr 28, 2025
Pointing to "white-hat" hacking, AI policy experts recommend a new system of third-party reporting and tracking of AI’s flaws.