
Building an Ethical Computational Mindset

Stanford launches an embedded EthiCS program to help students consistently think through the ethical issues that commonly arise in computer science.

Image: Students work on computers in a lecture hall. (Photo: Linda A. Cicero)

Stanford's embedded ethics program will ensure that more students understand the importance of ethics in a technological context and signal that ethics is integral to their work. 

Technology is facing a bit of a reckoning. Algorithms impact free speech, privacy, and autonomy. They, or the datasets on which they are trained, are often infused with bias or used to inappropriately manipulate people. And many technology companies are facing pushback against their immense power to impact the wellbeing of individuals and democratic institutions. Policymakers clearly need to address these problems. But universities also have an important role to play in preparing the next generation of computer scientists, says Mehran Sahami, professor and associate chair for education in the Computer Science department at Stanford University. “Computer scientists need to think about ethical issues from the outset rather than just building technology and letting problems surface downstream.” 

To that end, the Stanford Computer Science department, the McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI) are jointly launching an initiative to create ethics-based curriculum modules that will be embedded in the university’s core undergraduate computer science courses. Called Embedded EthiCS (the uppercase CS stands for computer science), the program is being developed in collaboration with a network of researchers who launched a similar program at Harvard University in 2017. 

“Embedded EthiCS will allow us to revisit different ethical topics throughout the curriculum and have students get a better appreciation that these issues come up in a more constant and consistent manner, rather than just being addressed on the side or after the fact,” Sahami says.  

Once the modules have been successfully implemented at Stanford, they will be disseminated online (under a Creative Commons license) and available for other universities to use or adapt as a part of their own core undergraduate computer science courses. “We hope, through this initiative, to make an engagement with ethical questions inescapable for people majoring in computer science everywhere,” says Rob Reich, professor of political science in the School of Humanities and Sciences, director of the McCoy Family Center for Ethics in Society, and associate director of Stanford HAI. 

Expanding the Curriculum

Teaching ethics to Stanford undergraduate computer science students is not new. Stand-alone ethics courses have been offered for more than 20 years, and a new interdisciplinary Ethics and Technology course was launched three years ago by Reich, Sahami, Jeremy Weinstein (professor of political science in the School of Humanities and Sciences), and other collaborators. But the Embedded EthiCS initiative will ensure that more students understand the importance of ethics in a technological context, Sahami says. And it signals to students that ethics is absolutely integral to their computer science education.

The initiative, which is funded by a member of the HAI advisory board, has already taken its first step: hiring Embedded EthiCS fellow Kathleen Creel. She will collaborate with computer science faculty to develop ethics modules that will be integrated into core undergraduate computer science courses during the next two years. 

Creel, who says she feels as if she’s been training for this job her whole life, double majored in computer science and philosophy as an undergraduate before working in tech and then getting her PhD in the history and philosophy of science. 

“Studying computer science changed the way I think about everything,” Creel says. She remembers being delighted by the way her mindset shifted as she learned how to formulate problems, define variables, and create optimization algorithms. She also realized (with help from her philosophy coursework) that each of those steps raised ethical questions. For example: For whom is this a problem? Who benefits from the solution to this problem? How does the formulation of this problem have ethical consequences? What am I trying to optimize?

“One of the hopes behind the Embedded EthiCS curriculum is that as you’re learning this whole computational mindset that will change your life and the way you think about everything, you’ll also practice, throughout the whole curriculum, building ethical thinking into that mindset,” Creel says.

‘Spaces to Think’

The Embedded EthiCS modules created by Creel and her collaborators will be deployed in one class during the fall quarter of 2020 and in two classes in each of the winter and spring quarters of 2021. Each module will include at least one lecture and one assignment that grapples with ethical issues relevant to the course. But Creel says she and her collaborators are also working on ways to embed the modules more deeply – so that they aren’t just stand-alone days.

Topics covered will vary depending on the course, but will include fairness and bias in machine learning algorithms, the manipulation of digital images, and other issues of interpersonal ethics in technology, such as how a self-driving car should behave in order to preserve human life or minimize suffering. Creel says modules will also address how technology should function in a democratic society, as well as “meta-ethical” issues such as how a person might balance duties as a software engineer for a particular company with duties as a moral agent more generally. “Students often want very much to do the right thing and want opportunities and spaces to think about how to do it,” Creel says. 

The goal, says Anne Newman, research director at the McCoy Family Center for Ethics in Society, is “for students to gain the skills to be good reasoners about ethical dilemmas, and to understand what the competing values are – that there are value tensions and how to muddle through those.”

As Reich sees it, “We want the pipeline of first-rate computer scientists coming out of Stanford to have a full complement of ethical frameworks to accompany their technical prowess.” At the same time, he hopes that the many students at Stanford who take intro computer science courses but don’t major in the field will also benefit from understanding the ethical, social, and political implications of technology – whether as informed citizens, consumers, policy experts, researchers, or civil society leaders. “We won’t create overnight a new landscape for the governance or regulation of technology or professional ethics for computer scientists or technologists,” he says. That change, he believes, will come by educating the next generation.

