Abstract: In this talk I hope to illustrate how AI ethics can avoid the undesirable extremes of two dimensions:
First dimension: Complacency vs Inflation
On the one hand, I will argue that we should eschew the complacent view that AI presents no novel challenges for ethics: the view that AI is just a technology like any other, so that extant ethical principles for non-AI technologies are all we need. On the other hand, I will argue that we should also avoid the inflationary view that the need for new ethical principles for AI derives from the fact that AI systems are themselves moral agents and/or patients.
Second dimension: Reactive systems vs Robots with Obligations
On the one hand, I will argue that the ethical construction of autonomous AI systems (including, but not limited to, autonomous robots such as driverless cars) will require that such systems do more than *merely* transform an input signal into an output signal (as is prevalent in much machine learning technology); to have the counterfactual richness that ethics requires, at least part of that transformation will have to have deliberative structure. On the other hand, AI systems that reason about their ethical obligations and about what is morally permissible for them are not a solution, since AI systems will not, for the foreseeable future, be the kinds of things that could have ethical obligations or moral permissions.
For each of these dimensions, I give a specific example (a policy and a design, respectively) that avoids the horns of the dilemma and the moral hazards they entail.
Abstract: Over the past few years, a series of crises and scandals has resulted in pushback against some of the biggest tech companies, both from employees within their ranks and from the broader public. This pushback has highlighted a deep concern with the ethical implications of data-driven technologies, which have been implicated in anti-democratic practices, automated warfare, the spread of mis- and disinformation, and the perpetuation of racial and gender disparities. In the wake of this pushback, Silicon Valley tech companies have recently begun to invest resources in hiring personnel to design, develop, and manage a portfolio of practices that can adequately address these areas of ethical concern. In this talk we will discuss our recent research, in which we interviewed dozens of these "ethics owners" to learn how they understand their emerging job roles, the conceptual and organizational challenges that stand in their way, and the tensions that emerge from developing ethics practices within and across their companies.
Biography: H. Andrew Schwartz is director of the Human Language Analysis Beings (HLAB) and Assistant Professor of Computer Science at Stony Brook University (SUNY). His interdisciplinary research focuses on developing large-scale language analyses for the health and social sciences, as well as integrating human factors into predictive models to improve natural language processing for diverse populations. He is an active member of many subcommunities of AI (e.g. NAACL, EMNLP, WWW), psychology (e.g. Psychological Science, JPSP, PNAS), and health informatics (e.g. JMIR, Nature Scientific Reports, American Journal of Public Health). He was previously Lead Research Scientist for the World Well-Being Project at the University of Pennsylvania, an interdisciplinary team studying how large-scale language analyses can reveal and predict differences in health, personality, and well-being. He received his PhD in Computer Science from the University of Central Florida in 2011, with research on acquiring lexical semantic knowledge from the Web.
Abstract: Natural Language Processing (NLP) conventionally focuses on modeling words, phrases, or documents. However, natural language is generated by people, and with the growth of social media and automated assistants, NLP is increasingly tackling human problems that are social, psychological, or medical in nature. Language shared on Facebook and Twitter has already been used to measure characteristics ranging from individual depression and personality to community well-being, mortality, and, recently, COVID-19 symptom rates. In this talk I will summarize recent work from the Human Language Analysis Lab on moving NLP further towards modeling people as the originators of language. This includes controlling for and correcting biases from extralinguistic variables (demographics, socioeconomics), placing language in time (forecasting future outcomes), and leveraging the inherent multi-level structure of language (people, who belong to communities, generate language). Taken together, I will suggest that considering the people behind the language not only offers opportunities for improved accuracy but could also be fundamental to NLP's role in our increasingly digital world.
Biography: Vinay Prabhu is currently the Chief Scientist at UnifyID Inc, where he leads efforts towards architecting and deploying the state-of-the-art passive mobile biometrics solution by bringing together machine learning algorithms and smart-sensor data to model the human behind the device. Prior to his work at UnifyID Inc, he was a Data Scientist at Albeado. He received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University.
Abstract: The thawing of the AI winter and the subsequent deep learning revolution have been marked by large-scale, open-source-driven democratization efforts and a paper-publishing frenzy. As we navigate this massive corpus of technical literature, four categories of ethical transgressions come to the fore: dataset curation, modeling, problem definition, and sycophantic tech journalism. In this talk, we will explore specific examples in each of these categories, with a strong focus on computer vision. The goal of this talk is not just to demonstrate the widespread usage of these datasets and models, but also to elicit a commitment from the attending scholars either to refrain from using these datasets and models, or to insert an ethical caveat in cases of unavoidable usage.
Biography: David Robinson is a Visiting Scientist at the AI Policy and Practice Initiative at Cornell University's College of Computing and Information Science. He is also co-founder and Managing Director of Upturn, a nonprofit that advances equity and justice in the design, governance, and use of digital technology, and a co-director of the MacArthur Foundation's Pretrial Risk Management Project. His research spans law, policy and computer science. While working at Upturn, he designed and taught a Georgetown Law seminar course on Governing Automated Decisions. David served as the inaugural Associate Director of Princeton University's Center for Information Technology Policy. He holds a JD from Yale and studied philosophy at Princeton and Oxford, where he was a Rhodes scholar.
Abstract: On December 4, 2014, the algorithm that allocates kidneys for transplant in the United States was replaced, following more than a decade of debate and planning. This process embodied many of the strategies now being proposed and debated in the largely theoretical scholarly literature on algorithmic governance (and in a growing number of legislative and policy contexts), offering a rare chance to see such tools in action. The kidney allocation algorithm has long been governed by a collaborative multistakeholder process; its logic and detailed data about its operations are public and widely scrutinized; the design process carefully assesses a complex blend of medical, moral, and logistical factors; and independent experts simulate possible changes and analyze system performance. In short, a suite of careful governance practices for an algorithm operates in concert. In this talk, I reconstruct the story of the allocation algorithm’s governance and of its bitterly contested redesign, and ask what we might learn from it. I find that kidney allocation provides both an encouraging precedent and a cautionary tale for recently proposed governance strategies for algorithms. First, stakeholder input mechanisms can indeed be valuable, but they are critically constrained by existing legal and political authorities. Second, transparency benefits experts most, and official disclosures are no substitute for firsthand knowledge of how a system works. Third, the design of an algorithm allocates attention, bringing some normative questions into clear focus while obscuring others. Fourth and finally, a public infrastructure of analysis and evaluation is powerfully helpful for informed governance.