
Democracy and the Digital Transformation of Our Lives

In a new book, scholars across multiple disciplines reflect on how digital technologies intersect with democratic institutions, and offer a broad agenda for future research.

Image: People sit outside, the photo focused on the hands in their laps, holding coffee mugs and American flags. (Larry White | Pixabay)

How can modern-day democracies mitigate risks from digital technologies that have undermined recent elections?

Every citizen is aware that digital technologies have transformed our individual and collective lives. But democratic theorists have been slow to take stock of this transformation and to trace how democratic theory and institutions should respond. The new book Digital Technology and Democratic Theory, edited by Stanford Institute for Human-Centered Artificial Intelligence Associate Director Rob Reich, Stanford Digital Civil Society Lab Director Lucy Bernholz, and Yale professor Hélène Landemore, brings together a multidisciplinary group of scholars — across political philosophy, social science, and engineering — to weigh in on the implications of digital technologies for democratic societies as well as ways in which democracies might be enhanced by these advances.

Here, Reich, who is also a professor of political science in Stanford's School of Humanities and Sciences, director of the Center for Ethics in Society, and co-director of the Center on Philanthropy and Civil Society, discusses the book's purpose, reach, and takeaways.

What are the high-level takeaways from the book?

Image: Digital Technology and Democratic Theory book cover artwork

We had at least a decade of techno-utopianism in which digital technologies were thought to be inherently liberating, that they would spread democracy across the world, and that they would enrich individual lives in some unparalleled fashion. And then we switched to a decade of techno-dystopianism in which digital technologies hijacked our attention, violated our privacy, corroded our very souls, and undermined democratic societies.

This volume takes a mature approach to thinking about the intersection of digital technology and democratic theory, so that we can better understand how to harness digital technology’s great benefits and mitigate or contain the potential risks.

Just as in earlier eras of technological revolution, we call upon readers to avoid the polar extremes of treating the development and deployment of technology as uniformly good or uniformly bad. This is a book for people who want to take a longer view — pondering the implications of technology for democratic institutions over the next 10 to 50 years rather than reacting to the newest unicorn or the scandal du jour. It's also a book for scholars across the world who can find in this volume a rich and fertile set of research agendas to pursue, as well as an appreciation for the ways in which cross-disciplinary consensus can help guide where we should focus our attention.

You and your co-editors say that democratic theorists haven't really figured out if social media companies are publishers, news organizations, or a new form of "private government" or even "private superpower." Why is it so difficult to get a clear understanding of the power wielded by the tech industry?

Social media platforms are certainly powerful. In the book, we quote from a Stanford-affiliated scholar from Oxford, Timothy Garton Ash, who says, “The policies of Facebook and Google are more consequential for permissible speech than is anything decided by Canada, France, or Germany.” Indeed, he says, big tech firms are the new “private superpowers.”

These are the great public squares of our 21st-century digital age. And as a result, the private power of the CEOs of these companies to determine permissible or impermissible content or to design the algorithms that uprank and downrank content means they shape the information ecosystems of citizens across the world. That’s an extraordinary form of power that currently has almost no form of accountability attached to it. 

The decision by all the major social media companies, in the wake of the January 6 insurrection at the U.S. Capitol, to ban Donald Trump from posting and then to delete his account is just the latest proof of this extraordinary power in the hands of a few people at a few large companies.

We can’t decide what to do about social media companies, or how to rein in their power, until we have a clear understanding about their actual function and purpose. 

Some say that Google, Facebook, Snapchat, TikTok, or Twitter are like the telephone company — a conduit that connects people and makes communication possible. When two people plot a crime on the phone, no one blames the phone company. Is that what a social media platform is? 

Clearly not. The core function of these platforms — curating, upranking, and downranking information for us — makes them different from the telephone company.

Some would say that social media platforms are distributors of content that people consume, and that they should therefore abide by the kinds of professional norms or standards that newspapers, television shows, or radio programs rely on when they make judgments about what should be published. But unlike newspapers and other mass media, social media platforms don't create content — users do.

So, we are left with the question, what are these platforms? The answer is that their core function is algorithmic sorting or curation. And this allows for great amplification of content and the possibility of privileging virality over veracity. And, of course, their function is also to sell advertising based upon a massive collection of data about our online behavior. 

As a result of not having a clear-eyed view of what platforms are or how they wield the power they do, we don’t yet have a clear understanding of how to govern them. And that’s part of the great debate we see playing out today about such things as privacy policies, misinformation and disinformation, CDA 230 [section 230 of the Communications Decency Act], political advertising, and so on. 

The book’s introduction describes one view of tech company leadership as “a band of ahistorical, techno-libertarian merry pranksters and sociopaths.” If these are the people with so much power, how can one avoid feeling dystopian, especially during a global pandemic?

That sentence was meant to capture the spirit of the techno-dystopian rhetoric that is so common today. My view is that we should stop focusing on the personalities of tech founders. And we should start focusing on the influence of concentrated tech power over the rest of our lives.

We have a big lesson to learn from the coronavirus pandemic. The pandemic and work-from-home conditions should remind us of how essential digital technologies are and how dependent upon them we have become. The work productivity made possible by videoconferencing, compared with what was possible 10 years ago, is owed to digital technology. The same is true for connecting with family and friends across the country whom we can't see in person. Not to mention all of the AI tools that have been essential for identifying therapies and vaccine candidates for the coronavirus.

So that’s partly why I would like to say we’re coming out of a dystopian sensibility. Perhaps the coronavirus can remind us that rather than being uniformly bad, these technologies have become something like the essential infrastructure that has allowed certain elements of our lives to continue during the pandemic. And now is the time to have this mature and sober perspective and to get serious rather than to indulge in utopianism or dystopianism.

Are there ways in which digital technologies might be used to enhance democratic institutions?

Rather than addressing the need to have democratic societies govern digital technologies before they govern us, some of the chapters in this book look at the ways digital technologies can be incorporated into democratic institutions for the purpose of enhancing the performance of democracy itself. 

Indeed, digital technology can be put in the service of democracy and expand how we think about the operation of democratic societies. For example, one of the co-editors, Hélène Landemore, a political philosopher at Yale, contributed a chapter about ways in which digital technologies might help us move beyond representative democracy itself. In essence, she explores alternatives to holding elections in which our elected representatives go off and do the business of the people and then citizens do nothing except show up again in a few years to cast another vote. Are there ways in which we might crowdsource, Wikipedia style, the writing of a constitution, with people across the world contributing to the writing and editing of our very laws? Or ways in which citizen assemblies can happen online as a complement to, or possibly a replacement of, elected representatives? She shows that this is not merely possible, but that it has already been done, and to some good effect.

Again, this is a way of looking further into the future — as a way to enlist digital technology not as a threat to democracy but as a handmaiden to it.

The book calls for the training of “public interest technologists.” What do you mean by that and what role would these people play in our democracy?

We're all familiar with the idea of public interest lawyers — people who get a law degree and then work on behalf of the public interest, whether through a public advocacy or other civil society organization. At the moment, engineering schools and computer science departments tend to pay only lip service to the idea that you should acquire technical skills and then deploy them on behalf of public agencies. Most people who receive computer science training go to work at tech companies. And our universities, including Stanford, facilitate that through recruitment programs that give tech companies privileged access. It's much harder for a Stanford computer science major to get a lower-paying job in a public agency than to get a higher-paying job at a startup or big tech firm.

So the option of being a public interest technologist would open up the computer science and engineering career pipeline to multiple destinations. It’s clear that technical skills are extraordinarily important within public advocacy organizations and public agencies. Imagine what the world would be like if Amnesty International, Partners in Health, the United Nations, or various governmental agencies could hire people with the technology talent that Google and Facebook get. Wouldn’t it be nice to have a world in which that was seen as just as important as — or more important than — deploying your talent for big tech or the promise of a payday in a startup company? 

Technologists often complain that democracy is too slow and the people who impose policies are never sufficiently informed; they always use a hammer instead of a well-crafted tool; Washington, D.C., is always 10 to 20 years behind on the frontier of technology. That’s why we need a new generation of people who have learned technology alongside social science, ethics, and democratic theory. 

The book suggests that multidisciplinary collaborations will be a fruitful research pathway. Why is such work so important?

Above all else, this is a book that we hope exhibits the enormous importance and promise of putting philosophers, social scientists, and technologists in conversation with one another. 

Stanford HAI is premised on the idea that the development of AI will be human-centered when AI scientists work alongside social scientists and humanists rather than inviting the social scientists and humanists to study the effects of AI on the world after the technologists have invented and released it. The same is true for digital technology and democratic theory.

I would like to see a world in which democratic theorists don't offer lectures to technologists about what they should do better in order to support democracy, but instead work alongside them to understand their perspectives. And reciprocally, technologists shouldn't invite democratic theorists to admire their extraordinary innovations and disruptions, and then tell them it's the theorists' job to respond to those disruptions and keep up with the pace of innovation.

Digital technology will develop in a better way when done in tandem with democratic theorists, and democratic practice will be better when pursued in tandem with technologists. 

Stanford HAI's mission is to advance AI research, education, policy and practice to improve the human condition.
