This spring, scholars at the Stanford Institute for Human-Centered AI put forth a call: What are the most radical policy proposals focused on emerging technologies that could respond to the challenges and opportunities of an AI-powered future?
Now some of the most ambitious proposals will be discussed Nov. 9-10 during the Stanford HAI Fall Conference, Policy and AI: Four Radical Proposals for a Better Society:
- Andrew Yang, former presidential candidate: Universal Basic Income to Offset Job Losses Due to Automation
- Divya Siddarth, Microsoft Associate Political Economist and Social Technologist: Data Cooperatives Could Give Us More Power Over Our Data
- Francis Fukuyama, Senior Fellow at the Freeman Spogli Institute for International Studies: Middleware Could Give Consumers Choices Over What They See Online
- Deborah Raji, Mozilla Foundation fellow and CS PhD student at UC Berkeley: Third-Party Auditor Access for AI Accountability
During the online conference, each proposal will be vetted by a panel of experts from academia, industry, government, and civil society. Audience members will be encouraged to ask questions and join the conversation.
“I’m excited about the range of incredible speakers who will be engaging in these four ambitious proposals,” says conference co-host Daniel E. Ho, Stanford Law School professor and Stanford HAI associate director. “Proposers include public intellectuals like Frank Fukuyama and rising stars like Deb Raji, and the panels represent a true range of backgrounds, from politicians to social scientists, and technologists to technology skeptics.”
Here conference hosts Ho and Erik Brynjolfsson, faculty director of the Stanford Digital Economy Lab and senior fellow at HAI, discuss the purpose of the event, who should attend, and what they mean by “radical.”
Why are you hosting this conference? Why have this conversation now?
Ho: Researchers around the world, including many at Stanford, are doing pioneering work, developing powerful technologies transforming the world. But technology doesn’t determine our future. It’s only one piece of the puzzle. We have a responsibility to think critically about the laws and policies needed to bring about a future that reflects the values of ethics, inclusion, and equality that we seek as a community. Only then can we ensure that AI’s impacts lead to the shared prosperity and better quality of life that we hope to achieve.
Brynjolfsson: HAI is committed to shaping the development of AI for the betterment of humanity. This makes this an especially important conference for us to host. Our researchers are working hard to make sure that the AI technologies being developed under our watch are designed with human impact in mind, and we’re creating a model for AI developers that we hope other institutions will follow.
What are your goals for this conversation?
Ho: Our goal with this conference is to move beyond high-level conversations around AI ethics and governance into concrete proposals for reform in four key areas. We’re hoping to spotlight four innovative proposals that address significant challenges in data governance, platform regulation, the impact of AI on labor, and bias in AI.
What do you mean by “radical”? Radical in what sense?
Brynjolfsson: We chose proposals to be “radical” in the sense that they are not small technocratic fixes. They are ambitious proposals that grapple with fundamental problems and will require a change in outlook to adopt. At the same time, we don’t want proposals that are simply “pie in the sky” dreams that have no hope of being implemented.
The best proposals are the ones that are radical enough to make a real difference, but also have a plausible path to actually working.
How did you select these four proposals?
Ho: Earlier this year, we issued a call for proposals and reached out to many colleagues across disciplines to source radical proposals. We were gratified to see so many ideas from around the world – there was so much creativity out there. Of course, that meant we received far more proposals than could be presented at our conference, so we drew on the HAI fellows and the conference planning group to winnow down the most interesting ideas over a series of meetings.
What didn’t make the cut?
Brynjolfsson: Some proposals were compelling, but on very narrow issues. Others had not given much thought to how a big idea might be implemented.
Who should attend this conference?
Brynjolfsson: At HAI, we try to rotate our conferences around our three pillars: human impact, augmenting human capabilities, and emerging research around intelligence. This fall conference focuses on the “human impact” pillar to try to understand emerging legal, regulatory, and policy responses to the impact of AI. Anyone interested in AI governance, technology policy, and regulation might be interested in the conference.
What do you want your audience to take away from this event?
Ho: We hope to provide a rigorous perspective on four visions for how to address core challenges in the technology ecosystem. Policymakers should come away with ideas that are outside the box, which might not be implemented immediately, but can shape the long-term future as policy windows open up.
Stanford HAI's mission is to advance AI research, education, policy, and practice to improve the human condition.