Technology has brought sweeping changes into our lives and enabled many advances across society. Yet too often, breakthroughs in computer science have unintended social consequences that are not easily undone. What if universities trained students to consider social outcomes from the outset? Embedded Ethics initiatives at Stanford and other institutions seek to do just that, by integrating principles of ethical analysis throughout their undergraduate computing courses.
Earlier this month, the McCoy Family Center for Ethics and Society, the Computer Science Department, and Stanford HAI hosted a one-day Embedded Ethics Conference on the topic of teaching responsible computer science. Attendees came from schools across the U.S. and several other countries to exchange ideas about how to design, support, and implement new programs. The conference agenda featured a welcome by Jennifer Widom, dean of the School of Engineering at Stanford, keynotes from several leading scholars in the field, as well as lively panel discussions ranging from getting administrative buy-in to specific implementation strategies. A series of “lightning talks” included demos of a few programs that are in place at schools today. By all accounts, the conference was an inspiring event that brought thoughtful researchers together and surfaced a few promising new ideas.
“Ethics cannot be just a class on the side. It should be inescapable for students who are studying computer science,” said Mehran Sahami, the James and Ellenor Chesebrough Professor in the Computer Science Department at Stanford and a co-organizer of the event. “We need to give students meaningful ways to grapple with these issues, so they become mindful of the impact of the work they do.”
During and after the conference, attendees expressed appreciation for the opportunity to meet so many like-minded scholars, and they suggested the event served as a catalyst for taking action at their own schools.
A Collaboration between Computer Science and Philosophy
Barbara Grosz, Higgins Research Professor of Natural Sciences in Harvard’s John A. Paulson School of Engineering and a Stanford HAI Distinguished Fellow, kicked off the day with a presentation on the origins, evolution, and lessons of the Embedded EthiCS program that she co-founded at Harvard. “Siri and Watson drove me to develop an AI course that integrated ethics throughout its syllabus,” she recalled. “I saw that our students were taught to write efficient code, but they were not taught to think about ethics. At the time, I was focused on teaching a new seminar course, not a larger change.”
Grosz had some 60 students apply for 20 spots the first time she taught the course Intelligent Systems: Design and Ethical Challenges, and more than 140 applied the second year. At the end of the semester, students said they wanted CS to offer more courses that integrated ethics.
So she and Alison Simmons, the Samuel H. Wolcott Professor of Philosophy at Harvard, launched the Embedded EthiCS program in early 2017 with four courses and one graduate fellow. By spring 2023, it had evolved to reach 9,500 students through 47 courses, with both graduate students and postdocs contributing, and with philosophers and computer scientists meeting in a weekly “teaching lab” to coordinate the development of new modules.
Grosz explained that the benefits to the graduate student and postdoc fellows in the teaching lab for Harvard’s Embedded EthiCS program have ranged from students adapting their research or shaping entirely new research projects to fellows finding different kinds of job opportunities when they enter the workforce. And it’s a win for faculty, who gain confidence in their understanding of ethics and their ability to discuss it in their teaching of computing. “It was heartwarming to see so many kindred spirits together at the conference. No one school can develop such a program on its own. We need to help each other,” Grosz said.
Ethics, Policy, Computing and Data
Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace, delivered a thought-provoking talk about the evolution of debates about ethics and technology over the last 20 years, from privacy and security issues in the early days of the internet to disagreements about facial recognition to questions about today’s generative AI models. Cuéllar is also a former justice of the Supreme Court of California, a visiting scholar at Stanford Law School, and a member of the Stanford HAI Advisory Council. He has had a long-standing interest in the intersection of ethics, policy, computing, and data.
“In the beginning, I was focused on getting people to care — a lot. We were seeing the staggering change in human welfare due to technology, but there’s also been a darker side to the progress,” he said. When the conversation shifted a few years later to deep learning, big data, and what the technology meant for surveillance and privacy, Cuéllar saw dilemmas coming in the legal system around who would be liable for AI systems and their performance. “Now we’re in the era of generative AI, and we’re all part of an A/B testing cycle. The technology is evolving in real time, and we are all subjects. It’s hard to step back and ask what is working well and how would we want this to be done in an ideal world.”
Cuéllar challenged the audience to put aside the writing of principles and focus on specific scenarios, such as how to handle medical records, resolve diplomatic disputes involving technology, or identify pathways to catastrophic risk. He urged the audience to be honest about recognizing the trade-offs that must come with every decision and to avoid intellectual shortcuts. “We have an enormous moment of opportunity with the progress of technology and the current spike of interest among young people,” he said.
Developing Cultural Competence
One of the most pivotal talks of the day put the spotlight on the need for incorporating cultural competence into embedded ethics initiatives. Issues of diversity, equity, and inclusion have long been overlooked in computing disciplines; yet they significantly affect the cultures of university departments and tech organizations, as well as the retention of minoritized students, faculty, and staff. To shed light on this topic, Nicki Washington, Professor of the Practice of Computer Science and Gender, Sexuality, & Feminist Studies at Duke University, spoke about her research and experience teaching identity and cultural competence in computing.
“Universities need to take a ‘yes, and …’ approach of embedding ethics and cultural competence, instead of saying, ‘yes, but … not now, not here, or not me,’ ” Washington said. In 2020, she created the Race, Gender, Class, & Computing course as a space for students to have conversations about identity, including how it impacts and is impacted by computing, and to develop an understanding of why these issues matter. The course begins with an exploration of identity (i.e., race, ethnicity, gender, sexuality, class, and ability), forms of oppression against these identities, social justice movements to eliminate these oppressions, and policies enacted to exclude/include identities. After students have spent time reflecting on identity, the class starts looking at specific technologies — facial recognition, surveillance, fintech, voice recognition, health care algorithms — including who is considered the “default” user and their impact on people from different minoritized groups.
The elective course started with 20 students in fall 2020, and a wait list began almost immediately. Washington said she has taught the course six times to date and increased the class size to 100 to accommodate overwhelming student interest. “No two semesters have looked the same,” she explained. Each student and each class builds on what’s happening in the world at the time.
To scale these efforts beyond the campus at Duke, Washington leads the Alliance for Identity Inclusive Computing Education, which is focused on broadening participation in computing across K-16. She also launched the Cultural Competence in Computing (3C) Fellows Program, a two-year professional development program now accepting applications for its fourth cohort. “People in CS are finally starting to listen to social scientists and understand the impact their work has on society. Technology is not neutral,” she said.
Guidelines for Success
Speakers and panelists agreed on several guidelines for launching successful embedded ethics and social responsibility programs:
- Start small: Find a few faculty who, with assistance from people with ethics expertise, are prepared to incorporate ethics material into their courses; then let their positive experiences, along with student demand, generate growth from there.
- Encourage multidisciplinary collaboration: Invite students and faculty from other departments to get involved. Philosophy, the social sciences, biomedical engineering, and other engineering departments all have a role to play. Give colleagues time to learn how to listen to each other and work together.
- Embed the material: Representatives from schools with programs underway suggest it’s best to weave ethics modules into core computing courses, instead of just adding a new separate ethics course. If students simply check a box to complete a requirement, the program is less likely to result in meaningful change.
- Be creative: Get insight from others, and don’t be afraid to experiment and find the best approach for your institution.
- Try to plan ahead: Seek funding and administrative support at the outset, but don’t be discouraged if it doesn’t come all at once. Remember that once a program is off the ground, it must be maintained and evolved over time if it is to remain important.
Stanford, Harvard, and other schools have set up repositories of information for others to access and deploy. Stanford’s Embedded Ethics team recently launched a new website, Embedding Ethics in Computer Science, with curricular resources for undergraduate CS courses. In addition, the Responsible Computing Challenge offers a playbook with teaching advice, and the Association for Computing Machinery has established the ACM Code of Ethics and Professional Conduct, with case studies available on its website.
It’s early days for most embedded ethics programs, but those who attended this gathering said they were encouraged to hear from others in the field and to share their visions for this work. “Beyond the one day of meetings, I think we’ve created a community of practice,” Sahami said. “It’s not only about the ideas and best practices, but the invaluable connections to other people.”
Miss the conference? Watch the recording and see a list of resources to assist in designing and implementing embedded ethics programs.
Stanford HAI’s mission is to advance AI research, education, policy and practice to improve the human condition. Learn more.