Sandy Pentland: AI Should Nurture Communities

In his new book, Shared Wisdom, the scholar outlines the limits of today’s political and social structures, which he considers caught in historical ruts, and discusses how AI might help rebuild flourishing communities.
Sandy Pentland, a fellow at the Stanford Institute for Human-Centered AI’s Digital Economy Lab, has been thinking for decades about the relationship between people and technology. Beginning in 1985, he helped create the MIT Media Lab, a magnet for futurists, technologists, and designers, all motivated by ideas that bordered on crazy. He pioneered the field of computational social science, which draws on huge amounts of data to shine a light on human interaction. Discussions he co-led at the World Economic Forum laid the foundation for Europe’s 2018 data privacy law.
In his new book, Shared Wisdom, Pentland turns his attention to artificial intelligence and the ways in which well-designed AI might nurture communities. The book looks at moments in human history that propelled profound social change — the rise of civilizations, the Enlightenment, and the Scientific Revolution — then draws from these moments lessons about how AI might do the same by supporting the fundamental human capacity for deliberation.

When it comes to AI, “most people in Silicon Valley seem to either be doomers or think we’re all going to be lotus eaters,” Pentland says. He thinks both of these notions are wrong. Used properly, he believes the technology might help “build an interesting, healthy, and innovative society that restores our communities.”
The interview below, which touches on some of the book’s core themes, has been edited for length and clarity.
In your book, you talk a lot about political structures, like representative democracy, and their relationship to technology. What drew you to look at these institutions right now?
Bureaucracies are modeled after armies of the 1700s. In the army, you do what you’re told; there’s no questioning orders. As society develops further along these lines, it becomes more oppressive in various ways and it extinguishes innovation.
If we’re going to get through challenges like global warming and plastics pollution and authoritarianism (and AI, for that matter), then we need to have a lot of innovation in our social structure. We don’t simply need tools. That’s the main theme of the book. Everybody talks about tools, like IT tools or flying cars. No. Those kinds of technologies emerge as a result of our social structure. When you have a society that supports communities that are trying different ways of living, you get innovation in social institutions. Some of these new ways of living, as we saw in the Enlightenment, discover new patterns in nature and spin off new tools that can help us live better. This is what Justice Louis Brandeis meant when he called the states “laboratories of democracy.”
And what do we need to innovate our social structure? AI gives us the possibility of doing that cheaply and effectively. It gives us the chance to create and nurture the kind of community that we evolved for.
The same is true of democracy. Our democracy was a really good design for 1780, when towns were a couple of thousand people you knew and your representative to Congress lived nearby with their kids. Today, the top of the federal government comes from a class of people whose lives are completely different from those of most people. This is not representative democracy. Today’s leadership doesn’t know what people really think and want. To make democracy work today, we need tools like AI that allow us to have more effective discussions about how to get along with our neighbors, and to have this discussion continuously and inclusively.
Shared Wisdom suggests some ways AI can help support these discussions. What are your recommendations?
As an example, we built deliberation.io, which we're using with the city of Washington, D.C. This is an AI-driven online platform that supports everyday deliberation: Everybody in the community can contribute opinions about things and can see what other people think, and this tends to produce a level of consensus that supports collaborative and collective action.
Most recently, in D.C., we asked citizens how they want AI to be used in government. People told us they wanted AI agents to help them deal with the complexities of government and bureaucracy — to fill out the forms, to tell them when things are ready, to make sure they’re not screwing up.
And I understand that. Much of my government interaction is a nightmare, and I have a PhD. What do you do when you have two jobs and three kids? This is what we want: AI that emerges from community discussion, that strengthens community and political institutions.
As AI advances, so do its rules and regulations. In your book, you suggest that rather than relying on top-down laws defining appropriate use of AI, we lean on liability law to encourage beneficial uses. Why is that a better route?
Well, it’s the way we regulate technologies like electricity. There are very few laws about electricity per se, but there are laws about hurting people. It took us a while to figure out how to make wires that don’t kill people. But we did it. And now, if you don’t build a house with the correct wiring, you get sued.
Cars are like this, too. There were few laws directly regulating car construction at first, but as we figured out what caused harm, we developed regulations limiting speed and requiring seatbelts and airbags.
I think this is what we need to do with AI. And there are two basic steps to this. The first is transparency: We need to know what AI is doing, so there need to be audit trails, just as we have with money. These should be public, so that anybody can see if some company is ripping people off or if some AI is making bad decisions. The second step is a lot more accountability. When an AI service is causing harm, the person or organization selling that service should be liable for damages, and the audit trails should make it easy to collect compensation.
This sort of transparency lets us see where AI is doing well and where it’s not, because, honestly, we don’t really know right now. Take AI and education: Our educational systems are going to be changed dramatically. But I don’t think there are coherent visions of how AI should be applied to the challenge. To figure out what works, we need to try things and then actually keep track of what’s happening and make that accessible.
Basically, I'm advocating for regulation to focus on transparency and accountability. If you don't have those, you don't have anything.
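(To make the audit-trail idea concrete: Pentland doesn’t prescribe a specific mechanism, but one way to picture such a trail is an append-only log in which each recorded AI decision is chained to the previous record by a hash, so later tampering is detectable. The sketch below is purely illustrative; the AuditLog class and its record fields are hypothetical examples, not part of deliberation.io or any proposal in the book.)

```python
# Illustrative sketch only: a tamper-evident audit trail for AI decisions.
# Names and fields here are hypothetical, not an implementation from the book.
import hashlib
import json
import time


class AuditLog:
    """Append-only audit trail: each entry carries a hash of the previous
    entry, so any later alteration of the history is detectable."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # placeholder "genesis" hash

    def record(self, model_id: str, decision: str, rationale: str) -> dict:
        """Append one AI decision to the trail and return the stored entry."""
        entry = {
            "timestamp": time.time(),
            "model_id": model_id,
            "decision": decision,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the whole hash chain; False means something was altered."""
        prev = "0" * 64
        for stored in self.entries:
            body = {k: v for k, v in stored.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or recomputed != stored["hash"]:
                return False
            prev = stored["hash"]
        return True


# Hypothetical usage: an auditor could re-run verify() on a published log
# to check that no decision record was changed after the fact.
log = AuditLog()
log.record("benefits-triage-v2", "application denied", "reported income above threshold")
log.record("benefits-triage-v2", "application approved", "eligibility criteria met")
print(log.verify())  # True unless some entry was tampered with
```

Making such logs public, as Pentland suggests, is what would let outsiders check whether an AI service is causing harm and make it easier to pursue compensation.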
What are the first few practical steps toward that transparency and accountability?
California just passed a law that says all big companies have to publish their security and accountability procedures. That’s the zeroth step.
I would say the next thing you do is develop auditing and a regime where the results are public. Some of the big AI companies, like Anthropic and OpenAI, are beginning to do this. But we need much more.
Most people in Silicon Valley seem to either be doomers or think we’re all going to be lotus eaters. I think both are wrong and, instead, what's going to happen is that people are going to use AI in various ways — and some of those will be bad ways. We can figure out the bad ways either quickly or slowly. I think quickly is better, so that’s what I’m working toward.
But the key thing to me is reinforcing human connection. We have to talk to each other better, and we need collective action. Those two things are vital for the survival of our species. I think properly designed AI systems can support that.

