Building an Ethical Computational Mindset

Date: October 05, 2020
Topics: Education, Skills
Photo: Linda A. Cicero

Stanford launches an embedded EthiCS program to help students consistently think through the common issues that arise in computer science. 

Technology is facing a bit of a reckoning. Algorithms impact free speech, privacy, and autonomy. They, or the datasets on which they are trained, are often infused with bias or used to inappropriately manipulate people. And many technology companies are facing pushback against their immense power to impact the wellbeing of individuals and democratic institutions. Policymakers clearly need to address these problems. But universities also have an important role to play in preparing the next generation of computer scientists, says Mehran Sahami, professor and associate chair for education in the Computer Science department at Stanford University. “Computer scientists need to think about ethical issues from the outset rather than just building technology and letting problems surface downstream.” 

To that end, the Stanford Computer Science department, the McCoy Family Center for Ethics in Society and the Institute for Human-Centered Artificial Intelligence (HAI) are jointly launching an initiative to create ethics-based curriculum modules that will be embedded in the university’s core undergraduate computer science courses. Called Embedded EthiCS (the uppercase CS stands for computer science), the program is being developed in collaboration with a network of researchers who launched a similar program at Harvard University in 2017. 

“Embedded EthiCS will allow us to revisit different ethical topics throughout the curriculum and have students get a better appreciation that these issues come up in a more constant and consistent manner, rather than just being addressed on the side or after the fact,” Sahami says.  

Once the modules have been successfully implemented at Stanford, they will be disseminated online (under a Creative Commons license) and available for other universities to use or adapt as a part of their own core undergraduate computer science courses. “We hope, through this initiative, to make an engagement with ethical questions inescapable for people majoring in computer science everywhere,” says Rob Reich, professor of political science in the School of Humanities and Sciences, director of the McCoy Family Center for Ethics in Society, and associate director of Stanford HAI. 

Expanding the Curriculum

Teaching ethics to Stanford undergraduate computer science students is not new. Individual courses have been around for more than 20 years, and a new interdisciplinary Ethics and Technology course was launched three years ago by Reich, Sahami, Jeremy Weinstein, professor of political science in the School of Humanities and Sciences, and other collaborators. But the Embedded EthiCS initiative will ensure that more students understand the importance of ethics in a technological context, Sahami says. And it signals to students that ethics is absolutely integral to their computer science education.

The initiative, which is funded by a member of the HAI advisory board, has already taken its first step: hiring Embedded EthiCS fellow Kathleen Creel. She will collaborate with computer science faculty to develop ethics modules that will be integrated into core undergraduate computer science courses during the next two years. 

Creel, who says she feels as if she’s been training for this job her whole life, double majored in computer science and philosophy as an undergraduate before working in tech and then getting her PhD in the history and philosophy of science. 

“Studying computer science changed the way I think about everything,” Creel says. She remembers being delighted by the way her mindset shifted as she learned how to formulate problems, define variables, and create optimization algorithms. She also realized (with help from her philosophy coursework) that each of those steps raised ethical questions. For example: For whom is this a problem? Who benefits from the solution to this problem? How does the formulation of this problem have ethical consequences? What am I trying to optimize?

“One of the hopes behind the Embedded EthiCS curriculum is that as you’re learning this whole computational mindset that will change your life and the way you think about everything, you’ll also practice, throughout the whole curriculum, building ethical thinking into that mindset.”

‘Spaces to Think’

The Embedded EthiCS modules created by Creel and her collaborators will be deployed in one class during the fall quarter of 2020 and in two classes in each of the winter and spring quarters of 2021. Each module will include at least one lecture and one assignment that grapples with ethical issues relevant to the course. But Creel says she and her collaborators are also working on ways to embed the modules more deeply, so that they aren’t just stand-alone days.

Topics covered will vary depending on the course, but will include fairness and bias in machine learning algorithms, the manipulation of digital images, and other issues of interpersonal ethics in technology, such as how a self-driving car should behave in order to preserve human life or minimize suffering. Creel says modules will also address how technology should function in a democratic society, as well as “meta-ethical” issues such as how a person might balance duties as a software engineer for a particular company with duties as a moral agent more generally. “Students often want very much to do the right thing and want opportunities and spaces to think about how to do it,” Creel says. 

The goal, says Anne Newman, research director at the McCoy Family Center for Ethics in Society, is “for students to gain the skills to be good reasoners about ethical dilemmas, and to understand what the competing values are – that there are value tensions and how to muddle through those.”

As Reich sees it, “We want the pipeline of first-rate computer scientists coming out of Stanford to have a full complement of ethical frameworks to accompany their technical prowess.” At the same time, he hopes that the many students at Stanford who take intro computer science courses but don’t major in the field will also benefit from understanding the ethical, social, and political implications of technology – whether as informed citizens, consumers, policy experts, researchers, or civil society leaders. “We won’t create overnight a new landscape for the governance or regulation of technology or professional ethics for computer scientists or technologists, but rather by educating the next generation,” he says. 

Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition.

Contributor(s)
Katharine Miller