Conference

2021 Fall Conference on Policy & AI: Four Radical Proposals for a Better Society

Status: Past
Date: Wednesday, November 10, 2021, 9:00 AM - 5:00 PM PST

This year’s virtual fall conference features a novel format. We will present and discuss four policy proposals that respond to the issues and opportunities created by artificial intelligence.

Each policy proposal will be a radical challenge to the status quo and capable of having a significant and far-reaching positive impact on humanity. The proposals will be presented to a panel of experts from multiple disciplines and backgrounds, who will vet, debate, and judge the merits of each proposal. We will also encourage audience participation throughout.

Watch Event Recordings

Day 1

Day 2

Learn More About The Four Proposals

Four Radical Policy Proposals to Shape AI’s Impact on Society

Radical Proposal: Middleware Could Give Consumers Choices Over What They See Online

Radical Proposal: Universal Basic Income to Offset Job Losses Due to Automation

Radical Proposal: Data Cooperatives Could Give Us More Power Over Our Data

Radical Proposal: Third-Party Auditor Access for AI Accountability

Other Radical Proposals

We received nearly 100 proposals in response to our public call last spring. While we only have room to feature four proposals at our Conference, we are thrilled to see so much energy for making our society better and grateful to everyone who took the time to submit. In addition to the four we’ve selected to feature, we'd like to highlight six more. We encourage you to read through these proposals, and we hope they stimulate more “outside-of-the-box” ideas and policy innovations.

Event Contact
Celia Clark
celia.clark@stanford.edu


We propose novel “in-situ” data access rights, instead of data portability, to accelerate competition, innovation, and data use by bringing AI and ML algorithms to data instead of the reverse.

By Marshall Van Alstyne, Geoffrey Parker, Georgios Petropoulos and Bertin Martens

What is your policy proposal? Who defines data rights? How do data rights drive competition, growth, and innovation? US and EU legislation (e.g., CCPA, GDPR) seeks to empower individuals and boost competition via data portability rights. This is a partial step. By contrast, our proposal provides new and stronger data ownership rights with specific principles to help users capture value created from their data, while increasing privacy, competition, and innovation. We introduce a new “in-situ” right that allows businesses and individuals to police and act on their own data where it resides. In particular, we propose that data owners can authorize third parties to access live data on their behalf. This brings AI algorithms to data rather than data to algorithms, while solving major problems with data portability – obsolescence, moral hazard reporting, non-actionability, and security. It also facilitates competition among firms using AI to create network effects and increase innovation.
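As a rough, purely illustrative sketch of the in-situ idea (the class and method names below are our own assumptions, not the proposal's specification), a data host can run a third party's algorithm over authorized records in place, so that only the derived result, never the raw data, leaves the host:

from typing import Callable, Any

class DataHost:
    def __init__(self):
        self._records = {}   # user_id -> records held by the gatekeeper
        self._grants = set() # (user_id, third_party) pairs authorized by users

    def store(self, user_id: str, record: dict) -> None:
        self._records.setdefault(user_id, []).append(record)

    def grant(self, user_id: str, third_party: str) -> None:
        """User authorizes a third party to compute over their data in place."""
        self._grants.add((user_id, third_party))

    def revoke(self, user_id: str, third_party: str) -> None:
        """Users can terminate access they no longer wish to grant."""
        self._grants.discard((user_id, third_party))

    def run_in_situ(self, third_party: str, algorithm: Callable[[list], Any]) -> Any:
        """Run the third party's algorithm over authorized records; raw data stays put."""
        authorized = [
            rec
            for user_id, records in self._records.items()
            if (user_id, third_party) in self._grants
            for rec in records
        ]
        return algorithm(authorized)  # only the derived result is returned

host = DataHost()
host.store("alice", {"purchases": 12})
host.grant("alice", "rival-recommender.example")
avg = host.run_in_situ("rival-recommender.example",
                       lambda recs: sum(r["purchases"] for r in recs) / max(len(recs), 1))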

What problem does your proposal address? Our proposal solves (1) the competition problem that gatekeeper control over data forecloses market entry, reducing welfare, and (2) the efficiency problem that information asymmetry blocks third-party reuse, reducing innovation. Thus, our proposal resolves the tension between data aggregation that yields AI efficiency and performance benefits and data aggregation that yields market foreclosure and abuse of dominance. Finally, (3) our proposal makes it possible to evaluate algorithms for bias because they execute within one central infrastructure.

How does this policy proposal relate to artificial intelligence? Our proposal improves AI access, ethics, and transparency, while allowing training on much larger datasets than portability does. When users grant permission, “gatekeepers” must allow third-party access to the data they hold on users’ behalf. This solves AI ethics and transparency problems by enabling inspection of the algorithm to detect bias or unscrupulous behavior. Under our proposal, users can also punish bad behavior or terminate access they no longer wish to grant. By contrast, data portability moves the data to the third party, where users cannot be certain how their data are used or whether they are deleted upon request.


To build equitable AI systems, we need advance market commitments for training data. These AMCs will spur private innovation in collecting training datasets that are representative of the underlying population.

By Abhilash Mishra and Bhasi Nair

What is your proposal? A number of problematic applications of AI (from facial recognition to assessing the risk of heart attacks) can be tied to a lack of representative training data for algorithms. Can we create incentives for the private sector to collect population-representative data? A key bottleneck is the lack of a “guaranteed buyer” for these datasets, which deters private companies from investing in collecting representative datasets. Creating an advance market commitment, in which the government (or a philanthropic partnership) pays for the collection of equitable datasets, can help fix this market failure.

What problem does your proposal address? The proposal helps address the challenge of building equitable AI systems by ensuring that the training data used in these systems represent the populations the systems are meant to serve.

How does this policy proposal relate to artificial intelligence? AI systems are built from training data, and biases in that data can lead to problematic applications of AI.

Under the recent EU AI regulations, transparency is a recognized requirement for high-risk AI applications. Methods for model self-identification and broadcasting would ensure centralized data disclosure and transparency for regulators.

By Pamela Jasper

What is your proposal? AI models are built by algorithms to replicate human intelligence. But unlike humans, AI models are not self-referencing and generally have no means of identifying themselves. Smart Models is a method for the centralized management and disclosure of AI models in which each model carries a unique model ID within its API code. While in live use, the model broadcasts intra-day messages to a central server reporting its usage, status, and other vital information, including explainable decisions. The model acts like an aircraft transponder during flight. The broadcast receiver can sit either within or external to the developing firm; external receivers at regulatory bodies can monitor model usage in real time, bypassing faulty self-assessments or audits. Based on the work of NYU Professor Jon Hill.
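As a minimal, hypothetical sketch of such a transponder-style broadcast (the field names, model ID scheme, and values below are our own illustration, not a format defined by the proposal), a Smart Model might emit messages like this:

import json, time, uuid

MODEL_ID = "urn:smart-model:acme-credit-scorer:v2"  # unique ID embedded in the model's API code

def transponder_message(decision: str, explanation: str) -> str:
    """Build one intra-day usage/status message for the central receiver."""
    payload = {
        "model_id": MODEL_ID,
        "message_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "status": "live",
        "usage": {"decision": decision, "explanation": explanation},
    }
    return json.dumps(payload)

# In a real deployment this JSON would be sent to a receiver operated either
# inside the developing firm or by a regulator; here we just print it.
print(transponder_message("loan_denied", "debt-to-income ratio above threshold"))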

What problem does your proposal address? A major problem for high-risk AI (systems used in health care, criminal justice, lending, housing, education, etc.) is the need for regulatory oversight of model activity and for end users to know when AI is being used. This proposal would create a required central repository of usage data for live models, serving as a means of informing end users of, and verifying, their exposure to an AI model, and facilitating regulatory oversight of and insight into its usage. This is a data-intensive proposal, similar to FINRA's oversight of all stock exchange trades.

How does this policy proposal relate to artificial intelligence? Smart Models can be applied to all model types (AI, machine learning, deep learning). The approach facilitates transparency and governance and provides industry data for comparing usage statistics. Today, regulators rely on self-reported data from AI technology firms, when firms provide it at all. The proposal would transform regulatory technology and foster U.S. consumer confidence in AI.

We design a Pareto-improving universal basic income policy harnessing the first large-scale computable general equilibrium model of endogenous global technological change.

By Victor Ye and Seth Benzell

What is your proposal? We build a generational CGE model featuring 2.6 million agents, 17 regions, and three skill groups. Automation is simulated by allowing firms to adopt new production technologies - calibrated by projecting forward historical trends of technological change in the U.S. - as they are invented. We utilize this model to propose a UBI to equitably redistribute gains from automation. Specifically, a transfer of up to 5.1 percent of U.S. GDP, debt-financed for 20 years and paid for with progressive income taxation thereafter, provides a Pareto welfare improvement to all American workers regardless of age or skill.

Compared with a scenario of no automation at all, global automation benefits the U.S. economy, producing growth and capital deepening at the cost of greater inequality. Our proposed UBI guarantees that low-skill workers of all generations are weakly better off, while still allowing a 3-15% welfare gain for high-skilled workers compared to a scenario without automation.

What problem does your proposal address? Our UBI policy is designed to address inequality induced by skill-biased technological change. Our proposal differs from other UBI proposals in that it is precisely constructed based on state-of-the-art estimates of the impact of automation on firms, intergenerational redistribution mechanisms, progressivity of existing redistribution mechanisms, and the effects of global capital flows. We also analyze the welfare consequences of UBIs in a general equilibrium framework with different choices of taxes and incidence and with calibrations of deadweight loss. This allows us to quantitatively estimate the scale of transfers required to make each method of financing the UBI Pareto-improving.

How does this policy proposal relate to artificial intelligence? AI plays a critical role in projected trends of automation (as we model it, a shift of production input shares toward capital and high-skilled labor) by replacing costly menial labor with a relatively small amount of developer time and cheap processor time. We consider several different automation scenarios, with different assumptions about how quickly particular types of AI technologies will advance (e.g., machine vision vs. robotics). By combining the “suitability for machine learning” scores of Brynjolfsson, Mitchell, and Rock (2018) with international statistics on occupation, we can simulate how different scenarios will differentially affect regions, skill groups, and age cohorts.
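As a toy illustration of that last step (the scores and employment shares below are invented for the example, not the paper's actual calibration), occupation-level SML scores can be combined with a region's occupational mix to produce a rough exposure index:

# Hypothetical occupation-level "suitability for machine learning" scores in [0, 1].
sml_scores = {
    "clerical": 0.82,
    "machine_operator": 0.74,
    "software_developer": 0.35,
    "care_worker": 0.28,
}

def regional_exposure(employment_shares: dict) -> float:
    """Employment-weighted average SML score for one region or skill group."""
    return sum(share * sml_scores[occ] for occ, share in employment_shares.items())

region_a = {"clerical": 0.4, "machine_operator": 0.3, "software_developer": 0.1, "care_worker": 0.2}
region_b = {"clerical": 0.1, "machine_operator": 0.1, "software_developer": 0.5, "care_worker": 0.3}
print(regional_exposure(region_a), regional_exposure(region_b))  # higher value = more exposed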

When AI companies develop new technologies, they should be required to perform a distributive impact assessment to ensure that inventions enhance human job opportunities rather than solely displacing human workers.

By Katya Klinova and Stephanie Bell

What is your proposal? We propose regulation requiring AI companies to measure and disclose their impact on labor demand: how many jobs have been created or eliminated, and made better or worse (in terms of wages and other key quality indicators), as a result of their existence. Measuring the labor demand impact would allow regulators to incentivize the development of AI applications that genuinely complement workers, boost productivity, and support the creation of good jobs, and to tax or otherwise disincentivize those that do not meaningfully grow productivity but instead transfer economic power from labor to capital by cutting labor costs, impoverishing workers, and devastating their communities. The presence of incentives and disincentives, as well as the societal pressure enabled by transparency around AI companies' labor demand impacts, would prompt companies to adopt targets around the non-destruction of good jobs, proactively anticipate the likely labor demand impacts of their product pipeline, and adjust it to meet those targets.
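To make the disclosure concrete, here is one hypothetical sketch of what a single labor-demand impact record could contain (the fields, product name, and figures are illustrative assumptions, not a format specified in the proposal):

from dataclasses import dataclass, asdict
import json

@dataclass
class LaborDemandImpact:
    product: str
    reporting_period: str
    jobs_created: int
    jobs_eliminated: int
    median_wage_change_pct: float  # change in wages for affected roles
    job_quality_notes: str         # other key quality indicators

report = LaborDemandImpact(
    product="automated-claims-triage",
    reporting_period="2021-H2",
    jobs_created=40,
    jobs_eliminated=180,
    median_wage_change_pct=-3.5,
    job_quality_notes="remaining adjusters shifted to higher-volume review quotas",
)
print(json.dumps(asdict(report), indent=2))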

What problem does your proposal address? Popular approaches to AI and the future of work fall into a series of traps. Proposals to reskill workers whose jobs are automated ignore economic research showing that technology is displacing tasks faster than it replaces them. Proponents of redistribution-reliant strategies encounter two major problems: 1) today's automation is often “so-so,” offering minimal productivity gains to redistribute, and 2) they lack feasible political paths to replacing workers' wages while covering for technology that destroys worker livelihoods globally. We propose “pre-distribution”: incentivize the creation of genuinely worker-complementing AI that expands productivity frontiers and economic gains while maintaining robust demand for waged work.

How does this policy proposal relate to artificial intelligence? It is not assured that AI’s gains will accrue to humanity at large as opposed to a small number of actors in the AI industry. At this early stage of AI development, we can prevent its worst economic impacts on workers—rather than forcing society to respond to mass labor market disruptions that don’t need to occur. Our proposal would ensure that AI companies do not capture windfall gains at society’s expense, and incentivize them to steer AI’s progress toward shared prosperity. Measuring and disclosing companies’ impact on labor demand would also help better differentiate genuinely human-augmenting AI from empty claims.

Visit the AI and Shared Prosperity Initiative to learn more.

Bots could one day dispense medical advice, teach our children, or call to collect debt. How can we avoid being deceived by actors with bad intentions? 

Imagine you are on the phone with an imperious and unpleasant debt collector. She knows everything about your financial history, but has no sympathy for your situation. Despite your increasingly frantic complaints, she simply provides a menu of unattractive repayment options. Would it temper your anger to know conclusively that she was not a hostile human being, but simply a bot tasked with providing you with a set of fixed options? Would you want the power to find out whether she was human?

We are quickly moving into an era in which artificial agents capable of sophisticated communication will be everywhere: collecting debts, dispensing advice, and enticing us to make particular choices. As a result, it will be increasingly difficult to distinguish humans from AIs in conversation or written exchanges. We are concerned that this constitutes a major change in social life, and presents a serious threat to fundamental aspects of our civil society. 

To help preserve trust and promote accountability, we propose the shibboleth rule for artificial agents: All autonomous AIs must identify themselves as such if asked to by any agent (human or otherwise).

The Case of Google Duplex

In May 2018, Google made headlines with demos of Google Duplex, a virtual assistant that can conduct realistic conversations over the phone in a handful of everyday scenarios like scheduling appointments. One striking feature of Duplex is its use of filled pauses (for example, “um” and “uh”). Google described this as part of “sounding natural.” Others interpreted it as “human impersonation.” In tests conducted by the New York Times in 2019, the deception seemed to run deeper: Duplex claimed Irish heritage and explicitly denied being a robot. The Verge reported on similar tests and found that people were routinely tricked into thinking Duplex was a human.

Read related: When Artificial Agents Lie, Defame, and Defraud, Who Is to Blame?

 

When Duplex debuted, it was immediately met with concerns about how it might reshape our society. Three years later, Duplex is available in numerous countries and 49 U.S. states, and it reportedly functions more autonomously than ever before.

And Google isn’t the only company experimenting with ever more realistic bots to interact with customers. In the near future, you will wonder: Am I getting medical advice from a physician or a bot? Is my child’s online classroom staffed with teachers or AIs? Is this my colleague on the call or a simulation? As AIs become more adept, they will become irresistible to numerous organizations, as a way to provide consistent, controlled experiences to people at very low costs.

It is now abundantly clear that sustained, coherent conversation from an AI does not imply that it has any deep understanding of the human experience, or even sensible constraints to avoid troubling behavior. This combination of traits could make AI agents extremely problematic social actors. The history of the Turing test shows that humans are not able to reliably distinguish humans from AIs even when specifically tasked with doing that, and even when the AIs are not especially sophisticated. More recent research with a top-performing language model (GPT-3) suggests that people can’t distinguish model-generated text from human-written text without special training. 

When people are not specifically tasked with looking for a bot, the task of detecting one may be even harder. One of the lessons of cognitive science is that even from infancy, humans are expert at attributing agency – the sense that something is able to act intentionally on the world – and do so pervasively, recognizing and naming mountains, trees, and storms (as well as cars and computers) as agents. So perhaps we are distinctively easy targets to be deceived by AI agents.  

However, humans tend to be more adept at reorienting themselves once they know definitively whether an agent is a human or AI. Our AI shibboleth rule would ensure that they could always obtain this vital information.

Our Modest Proposal

Our proposed shibboleth rule is simple:

Any artificial agent that functions autonomously should be required to produce, on demand, an AI shibboleth: a cryptographic token that unambiguously identifies it as an artificial agent, encodes a product identifier and, where the agent can learn and adapt to its environment, an ownership and training history fingerprint.
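As a purely illustrative sketch of what producing and checking such a token might look like (this uses a shared-secret HMAC for brevity; a real scheme would use public-key signatures and a public registry of product identifiers, and every name and value below is a hypothetical assumption, not part of our rule):

import base64, hashlib, hmac, json, time

SIGNING_KEY = b"example-manufacturer-secret"  # assumption: held by the agent's vendor

def issue_shibboleth(product_id: str, ownership_record: str, training_manifest: bytes) -> str:
    """Produce a token identifying an artificial agent on demand."""
    payload = {
        "is_artificial_agent": True,
        "product_id": product_id,
        # Fingerprint of ownership and training history, not the raw data itself.
        "provenance_fingerprint": hashlib.sha256(
            ownership_record.encode() + training_manifest
        ).hexdigest(),
        "issued_at": int(time.time()),
    }
    body = base64.urlsafe_b64encode(json.dumps(payload).encode())
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{sig}"

def verify_shibboleth(token: str) -> dict:
    """Check the signature and return the decoded claims (raises on tampering)."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid shibboleth token")
    return json.loads(base64.urlsafe_b64decode(body))

token = issue_shibboleth("duplex-like-assistant-v3", "Acme Corp, 2021-", b"training-run-manifest")
print(verify_shibboleth(token)["is_artificial_agent"])  # True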

Read related: A Moderate Proposal for Radically Better AI-powered Web Search

 

A very similar proposal was made by Walsh (2016) under the name “Turing’s Red Flag.” This rule is already on its way to becoming a reality. For example, in July 2019, California became the first U.S. state to put into effect legislation making it unlawful to use a bot to intentionally mislead someone online in order to incentivize a purchase or influence a vote. For this legislation to have real, enforceable consequences, there must be a specific, actionable test to prevent deception: the shibboleth.

Further, the shibboleth must encode information about the provenance of the agent and its history of ownership and usage. This information provides a potential solution for concerns about tracking and attributing responsibility to agents, especially those that adapt to their environments and thus begin to behave in ways that are unique and hard to predict. 

Questions and (Unintended) Consequences

Our primary goal for the shibboleth rule is simply to avoid ambiguous, frustrating, and potentially deceptive situations that can make it even harder for people to navigate difficult situations or allow unscrupulous actors to engage in troubling practices. Yet we expect such a rule to have wide-ranging consequences. For example, it would likely create a societal pressure to keep AIs out of specific roles, even if it is technically lawful for them to be in those roles – users would have the information necessary to complain. Perhaps “fully human” agents could even become a verifiable marker of a deluxe customer service experience. On the other hand, we may discover scenarios in which people increasingly prefer AIs, who can perhaps be tirelessly polite and attentive. We also expect the shibboleth rule to help us grapple with challenging issues of agency and intention for artificial agents, especially those that can adapt to their environments in complex ways.

Beyond these economic effects, though, our proposal opens up further questions about human–AI interactions. Are there cases where it is ethical to avoid revealing that an interacting agent is an AI; for example, when the AI agent is serving as a crisis counselor whose efficacy critically depends on being thought to be human? Conversely, are there situations in which AI agents should preemptively identify themselves as such?

What counts as an agent? What counts as autonomous? The shibboleth rule might force decisions about complex cases. Although Duplex is clear-cut, many hybrid systems will soon present difficult boundary cases, as when customer service agents manage chat bots that use GPT-successors to generate seamless prose from terse suggestions. More generally, the boundaries of agency will continue to be an important area for researchers and for the law. Would an artificial biological system count as an artificial agent for our purposes? What about a human with extensive neural implants? Cognitive scientists have debated the boundaries of intelligence for years, and this body of theory may see new life as we decide whether thermostats with voice recognition have to identify themselves as autonomous agents.

Implementations of the shibboleth rule will also have to grapple with other human consequences. Perhaps human callers will pretend to be bots to avoid culpability of some sort. How shall we prevent them, either legally or practically, from fraudulently producing shibboleth tokens? 

Finally, the shibboleth rule may have consequences for some of the already existing applications of bots in non-interactive broadcasting environments, including posting on social media. Bots disproportionately contribute to Twitter conversations on controversial political and public health matters, with humans largely unable to distinguish bot from human accounts. Might the shibboleth rule be a way to curb the viral spread of misinformation?

An Ongoing Conversation

In the example of Google Duplex, we can begin to discern the potential value of our shibboleth rule, and that value is only going to increase as we see more agents like Duplex deployed out in the world. However, the full consequences of such a rule are hard to predict, and implementing it correctly would pose significant challenges as well. Thus, we are offering it in the spirit of trying to stimulate discussion among technologists, lawmakers, business leaders, and private citizens, in hope that such discussion can help us grapple with the societal changes that conversational AIs are rapidly bringing about. What makes little sense to us is to ignore the pace of innovation in the capabilities of artificial agents or the need for at least some clear rules to curb the downside risks of a world filled with these agents. 

Authors: Christopher Potts is professor and chair of the Stanford Humanities & Sciences Department of Linguistics and a professor, by courtesy, of computer science. Mariano-Florentino Cuéllar is the Herman Phleger Visiting Professor of Law at Stanford Law School and serves on the Supreme Court of California. Judith Degen is an assistant professor of linguistics. Michael C. Frank is the David and Lucile Packard Foundation Professor in Human Biology and an associate professor of psychology and, by courtesy, of linguistics. Noah D. Goodman is an associate professor of psychology and of computer science and, by courtesy, of linguistics. Thomas Icard is an assistant professor of philosophy and, by courtesy, of computer science. Dorsa Sadigh is an assistant professor of computer science and of electrical engineering.

The authors are the Principal Investigators on the Stanford HAI Hoffman–Yee project “Toward grounded, adaptive communication agents.” Learn more about our Hoffman–Yee grant winners here.