Rebooting the System: Why the Tech Industry Must Change
A few years ago, three Stanford University professors — a philosopher, a computer scientist, and a policy expert — launched what has become one of Stanford’s most popular undergraduate courses: Ethics, Public Policy, and Technological Change.
That class has now spawned a book intended for the rest of us: System Error: Where Big Tech Went Wrong and How We Can Reboot.
The authors are Rob Reich, professor of political science, director of the McCoy Family Center for Ethics in Society, and associate director of Stanford HAI; Mehran Sahami, professor and associate chair for education in Computer Science; and Jeremy Weinstein, professor of political science and faculty director of Stanford Impact Labs.
Their aim is to reach a broad audience with a powerful message: that as citizens in a democratic society, we can and should shape technology to better align with our shared goals.
Here, Reich discusses the book’s purpose and offers a taste of what we can all do to make a difference.
Why did this team of authors — a philosopher, a policy expert, and a computer scientist — decide to write this book?
Many people feel that the digital revolution has washed over us like a tidal wave and that we have no control over it. We wrote the book because we believe people can and should play an active role in shaping technology.
And as scholars from diverse fields, we also had our individual reasons to write the book. Jeremy Weinstein, who worked in the Obama administration, wanted to help policymakers gain the technical expertise they need to confront the challenges of Big Tech. Mehran Sahami, as a computer science professor who once worked at Google, wanted to help his students enter the workforce with a solid understanding of the potentially negative social consequences of technology, help tech workers ensure the products they’re developing are beneficial to society, and help investors make ethical choices about which companies to support. And I, as a political philosopher, wanted to show how citizens can make a difference and democracy can rise to the challenge of harnessing the great benefits of tech while mitigating the harms that are now so obvious to all of us.
Technologists often think about optimization — how to create the most effective tool. What risks does technologists’ optimization mindset carry for a democratic society?
The optimization mindset of the technologist is inculcated early on in a computer science or engineering education. And although it has its place, it often seems to become a worldview in which everything in life is an optimization problem.
Optimization is not inherently good. It’s a means to an end, nothing more. We don’t want to optimize things that are bad. What’s more, optimization requires a goal that is mathematically measurable or computationally tractable, and that requirement can lead engineers to choose imperfect or misleading measures. Facebook, for example, aimed to connect people, but its primary measure of success was the number of people on the platform, which isn’t a good measure of human connection.
Ultimately, though, the biggest problem with the optimization mindset is that it leads people to be skeptical or even cynical about democratic institutions that often seem dysfunctional and suboptimal at delivering the things that we want as citizens. But it's a mistake to think that democratic institutions were ever meant to optimize in the first place. That’s because in a democratic society, citizens have different preferences and values. We cherish pluralism and tolerance for people with different ideas about how best to live. Democracy is not an institutional apparatus for optimizing some outcome. It is essentially a technological design for fairly refereeing and resolving the disagreements we have as citizens, and that's what makes democracy important and valuable.
System Error talks about how values are encoded in technology and the need for democratic institutions to balance competing values. What do you mean by that, and why is it so important?
It’s been said often, but it bears repeating: technology is not value-neutral. There are various ways in which values are encoded in technology itself. For example, end-to-end encrypted messaging platforms like WhatsApp or Signal encode the value of privacy: Neither the company nor a law enforcement agency can get access to the content of your messages. Competing values such as national security or personal safety aren’t encoded into these platforms but could be. And in fact, Apple recognized that recently when it decided to begin scanning photos that people upload to iCloud with the aim of detecting child sexual abuse material. Presumably it did that because company leaders care about the personal safety of children.
Another more obvious example of encoded values involves the various social media platforms — Twitter, Facebook, Instagram, TikTok, Snapchat — all of which have a commitment to the value of freedom of expression. But the more these companies tip the scale toward complete freedom of speech, the more space they offer for hate speech and misinformation. And the reverse is true as well: The more the platforms clamp down on hate speech and misinformation, the less committed their products are to freedom of expression.
Our argument in the book is that, right now, we leave all of these values-based decisions to people inside companies. And given the scale at which these major companies operate, that's not appropriate. Mark Zuckerberg is the unelected governor — dictator, even — of the speech environment for more than 3 billion people. That’s too much power in the hands of one person. The health of our information ecosystem is something that concerns all of us. We have to bring our voices and our democratic institutions into the decision-making story so we can begin to counterbalance the concentrated power of big tech companies and make decisions about these value tradeoffs that reflect the preferences and values of all of us as citizens.
Are there some specific solutions you can point to that might give people a sense that they’re not powerless against the juggernaut of power and money that is Big Tech?
We talk in the book about several different actions that people can take. Perhaps most critically, our democratic institutions need to come to the fore. For example, antitrust actions can help check the concentrated power of Big Tech. And policymakers could work on changing the tax incentives to favor hiring and retaining workers rather than favoring the purchase of machinery that replaces workers.
We also need independent algorithmic accountability or auditing. That means that when algorithmic decision-making is deployed, especially within public institutions, there ought to be an independent set of actors who inspect and audit those decisions to ensure fairness and guard against discrimination.
And there’s a need for comprehensive legislation about privacy. We have early examples of this in Europe with the General Data Protection Regulation (GDPR) and in California with the California Consumer Privacy Act, but at the federal level in the United States, we haven't really had that yet.
Tech workers can also try to shape the discussion of important ethical and social concerns inside their companies, such as ensuring that ethical considerations are built into the product development lifecycle. The idea of having a chief ethics officer is a mistake — as if you could just outsource the relevant ethical considerations to a single group within a company. Ethics is everyone’s responsibility.
And, of course, there are things that every individual can do as a user of technology to protect personal privacy online or to choose which products to buy. But we emphasize in the book that putting the responsibility on individuals to decide whether or not to use a product is not an acceptable solution to the problems Big Tech has presented to us. When people say to “delete Facebook” or “stop using Uber,” it’s like telling someone who complains about potholes or traffic to just stop driving: it suggests that the only choice is between taking the entire system as it is or not using it at all. What we need instead is a way to shape these tools, platforms, and devices in ways that reflect all of our interests, not just those of the people in the tech companies.