Biography: Vinay Prabhu is currently the Chief Scientist at UnifyID Inc, where he leads efforts to architect and deploy a state-of-the-art passive mobile biometrics solution, bringing together machine learning algorithms and smart-sensor data to model the human behind the device. Prior to his work at UnifyID Inc, he was a Data Scientist at Albeado. He received his Ph.D. in Electrical and Computer Engineering from Carnegie Mellon University.
Abstract: The thawing of the AI winter and the subsequent deep learning revolution have been marked by large-scale, open-source-driven democratization efforts and a paper-publishing frenzy. As we navigate this massive corpus of technical literature, four categories of ethical transgression come to the fore: dataset curation, modeling, problem definition, and sycophantic tech journalism. In this talk, we will explore specific examples in each of these categories, with a strong focus on computer vision. The goal of this talk is not just to demonstrate the widespread usage of these datasets and models, but also to elicit a commitment from the attending scholars either to not use these datasets and models, or to insert an ethical caveat in cases of unavoidable usage.
Biography: David Robinson is a Visiting Scientist at the AI Policy and Practice Initiative at Cornell University's College of Computing and Information Science. He is also co-founder and Managing Director of Upturn, a nonprofit that advances equity and justice in the design, governance, and use of digital technology, and a co-director of the MacArthur Foundation's Pretrial Risk Management Project. His research spans law, policy and computer science. While working at Upturn, he designed and taught a Georgetown Law seminar course on Governing Automated Decisions. David served as the inaugural Associate Director of Princeton University's Center for Information Technology Policy. He holds a JD from Yale and studied philosophy at Princeton and Oxford, where he was a Rhodes scholar.
Abstract: On December 4, 2014, the algorithm that allocates kidneys for transplant in the United States was replaced, following more than a decade of debate and planning. This process embodied many of the strategies now being proposed and debated in the largely theoretical scholarly literature on algorithmic governance (and in a growing number of legislative and policy contexts), offering a rare chance to see such tools in action. The kidney allocation algorithm has long been governed by a collaborative multistakeholder process; its logic and detailed data about its operations are public and widely scrutinized; the design process carefully assesses a complex blend of medical, moral, and logistical factors; and independent experts simulate possible changes and analyze system performance. In short, a suite of careful governance practices for an algorithm operates in concert. In this talk, I reconstruct the story of the allocation algorithm’s governance and of its bitterly contested redesign, and ask what we might learn from it. I find that kidney allocation provides both an encouraging precedent and a cautionary tale for recently proposed governance strategies for algorithms. First, stakeholder input mechanisms can indeed be valuable, but they are critically constrained by existing legal and political authorities. Second, transparency benefits experts most, and official disclosures are no substitute for firsthand knowledge of how a system works. Third, the design of an algorithm allocates attention, bringing some normative questions into clear focus while obscuring others. Fourth and finally, a public infrastructure of analysis and evaluation is powerfully helpful for informed governance.
Bio: John Markoff is HAI’s Journalist-in-Residence. He is also a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS), participating in projects focusing on the future of work and artificial intelligence. He is currently researching a biography of Stewart Brand, the creator of the Whole Earth Catalog. Previously he was a Berggruen Fellow at CASBS. He has also been a staff historian at the Computer History Museum in Mountain View, Calif. Until 2017, he was a reporter at The New York Times, beginning in March 1988 as the paper’s national computer writer. Prior to joining the Times, he worked for the San Francisco Examiner. He has written about technology for Pacific News Service, was a reporter at Infoworld and West Coast editor for Byte Magazine, and wrote a column on personal computers for the San Jose Mercury. He has also been a lecturer at the University of California at Berkeley School of Journalism and an adjunct faculty member of the Stanford Graduate Program on Journalism. In 2013 he was awarded a Pulitzer Prize in explanatory reporting as part of a New York Times project on labor and automation. In 2007, he was named a fellow of the Society of Professional Journalists, the organization’s highest honor. In June of 2010, the New York Times presented him with the Nathaniel Nash Award, which is given annually for foreign and business reporting. He is the co-author of The High Cost of High Tech, published by Harper & Row. He co-wrote Cyberpunk: Outlaws and Hackers on the Computer Frontier, published by Simon & Schuster. Hyperion published Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw, which he co-authored with Tsutomu Shimomura. What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry was published by Viking Books. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots was published by HarperCollins Ecco. Markoff grew up in Palo Alto, California, and graduated from Whitman College in Walla Walla, Washington. He attended graduate school at the University of Oregon and received a master's degree in sociology.
Bio: Marietje Schaake is an International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the International Policy Director of the Cyber Policy Center, where she conducts policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she represents the center to governments, NGOs, and the technology industry. Schaake also teaches courses on cyber policy from an international perspective and brings to Stanford leaders from around the world to discuss cyber policy. Prior to joining Stanford, Marietje Schaake led an active career in politics and civic service. She was a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in the European Parliament, where she was first elected in 2009. In the European Parliament, Schaake focused on trade, foreign policy, and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.
Abstract: New developments in Artificial Intelligence, particularly deep learning and other forms of “second-wave” AI, are attracting enormous public attention. Both triumphalists and doomsayers are predicting that human-level AI may be “just around the corner.” To assess whether that prediction is true, we need a broad understanding of intelligence, in terms of which to assess: (i) what kinds of intelligence machines currently have, and will likely have in the future; and (ii) what kinds of intelligence people currently have, and may be capable of in the future. As the first step in this direction, I distinguish two kinds of intelligence: (i) “reckoning,” the kind of calculative rationality that computers excel at, including both first- and second-wave AI; and (ii) “judgment,” a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action, that is appropriate to the situation in which it is deployed. AI will develop world-changing reckoning systems, I argue, but nothing in AI as currently conceived approaches what is required to build a system capable of judgment.
Bio: Brian Cantwell Smith is Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, where he is also Professor of Information, Philosophy, Cognitive Science, and the History and Philosophy of Science and Technology, as well as being a Senior Fellow at Massey College. Smith’s research focuses on the philosophical foundations of computation, artificial intelligence, and mind, and on fundamental issues in metaphysics and epistemology. In the 1980s he developed the world’s first reflective programming language (3Lisp). He is the author of *On the Origin of Objects* (MIT Press, 1996), and of *On the Promise of Artificial Intelligence: Reckoning and Judgment* (MIT Press, 2019).
Rahul Panicker, Chief Innovation Officer, Wadhwani AI
Abstract: Drawing on our experience developing and deploying AI-for-social-good solutions that help healthcare workers in villages of developing countries weigh newborns using just a smartphone, help cotton farmers fight pest attacks, and help tuberculosis-control programs find and support TB patients, as well as advising organizations like the WHO, the UN ITU, and governments on AI, I will share some lessons learned and opportunities for AI to have large-scale impact across domains like health, agriculture, education, and financial inclusion. Such impact will require novel approaches across algorithms, human factors, regulatory frameworks, and systems thinking. AI for social good also offers a rich source of problems for AI, spanning computer vision, weakly supervised learning, causal reasoning, domain adaptation, uncertainty calibration, explainability, computing on low-resource devices, and privacy-preserving learning. The Wadhwani Institute for Artificial Intelligence is an independent nonprofit research institute that develops and deploys AI-for-social-good solutions in the developing world. Bio: Dr. Rahul Panicker, as Chief Innovation Officer, heads research at the Wadhwani Institute for Artificial Intelligence. Prior to this, he was co-founder of Embrace, a for-profit social enterprise that has helped over 500,000 babies worldwide through low-cost incubators that work without electricity. He is an MIT TR35 awardee, World Economic Forum Social Entrepreneur of the Year, Industrial Design Society of America Gold winner, and an Echoing Green Fellow. He holds an MS/PhD in EE from Stanford University, is an alumnus of the Stanford d.school, and has a B.Tech from IIT Madras.
Bio: Thomas Dimson is the original author of “The Algorithm” — the recommender systems behind Instagram's feed, stories, and discovery surfaces. He joined Instagram as one of its first 50 employees in 2013, working for seven years as a principal engineer and eventually an engineering director. In that time, he also invented products such as the stories polling sticker and Hyperlapse, and was named one of the top ten most creative people in business by Fast Company. Thomas graduated from the University of Waterloo with a Bachelor of Mathematics and received his master's in computer science from Stanford with a specialization in artificial intelligence.
Abstract: The biggest challenge with the democratization of content is how to make sense of its scale. In the last decade, curation of content has consolidated into the hands of a few of the largest technology companies. Today, that curation takes the form of machine learning — often dubbed “algorithms” by the media. Thomas helped build and introduce Instagram's most controversial algorithms: the non-chronological feed and personalized recommendations. He will discuss challenges from the perspective of an engineer in the control room as Instagram scaled to serve over a billion people. Thomas will share a few of his thoughts about future directions as we start to form a dialogue about the responsibilities of platforms operating on a global scale.
Workshop Leader: Emilie Silva
A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of outside academics from other institutions, and a small number of private sector and governmental analysts to focus on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, identify gaps in our understanding, and build a network of scholars and experts to address these challenges.
The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by advances in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.
A technology with the potential for such sweeping changes across human society should be evaluated for its potential effects on international stability. Many national security applications of AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated computer security tools that find and patch vulnerabilities, or machine learning systems that uncover suspicious behavior by terrorists. Current AI systems have substantial limitations and vulnerabilities, however, and a headlong rush into national security applications of artificial intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a “race to the bottom” on AI safety. Other security-related applications of AI could improve international stability.
CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial intelligence will influence international security and stability. For global stability, it is critical to begin a discussion about ways to mitigate the risks of autonomous systems and artificial intelligence while taking advantage of their benefits. This project will build a community from three sectors that often do not intersect: AI researchers in academia and business; international security experts in academia; and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and reports, this project will foster a community of practice and begin laying the foundations for a field of study on AI and international security. The project will conclude with recommendations to policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence while avoiding uses that could undermine stability.
This talk develops the proposal that a central – and neglected – ethical challenge for the field of AI is demystification of the techniques and technologies that constitute it. Demystification goes beyond questions of fairness, accuracy and transparency (although those are certainly relevant), to the problem of how we might set out clearly the prerequisites for the efficacy of AI’s operations. To make more concrete what she means by demystification, Lucy will examine the case of so-called ‘pattern of life’ analysis in the designation of persons and activities identified as posing a threat to the security of the US homeland. ‘Human-centered AI’ takes on a darker meaning in this context, as the human becomes centered in the cross hairs of a system of targeting, whether for assassination or incarceration. Lucy will close with some suggestions for how we might proceed with the project of demystification, beginning with an articulation of the limiting conditions as well as the unprecedented powers of contemporary algorithmic systems.
Join California Supreme Court Justice Cuéllar, who teaches the popular “Regulating AI” course at Stanford, Dan Ho, Associate Director of HAI and professor at the law school and political science, and Terah Lyons, the Founding Executive Director of the Partnership on AI, for a conversation on the law, regulation, and governance of AI! The three will provide a range of perspectives on the promise, challenges, and directions for AI governance.
Mariano-Florentino Cuéllar is a Justice on the Supreme Court of California, the Herman Phleger Visiting Professor of Law at Stanford University, and a faculty affiliate at the Stanford Center for AI Safety. A Fellow of the Harvard Corporation, he also serves on the boards of the Hewlett Foundation, the American Law Institute, and the Stanford Institute for Human-Centered Artificial Intelligence (HAI), and chairs the boards of the Center for Advanced Study in the Behavioral Sciences and AI Now. He received a J.D. from Yale Law School and a Ph.D. in political science from Stanford University and clerked for Chief Judge Mary M. Schroeder of the U.S. Court of Appeals for the Ninth Circuit.
Daniel Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, and is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences and Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI). He received his J.D. from Yale Law School and Ph.D. from Harvard University and clerked for Judge Stephen F. Williams on the U.S. Court of Appeals for the District of Columbia Circuit.
Terah joins the Partnership from the Mozilla Foundation, where she was most recently a Technology Policy Fellow. Prior to that she served as Policy Advisor to the U.S. Chief Technology Officer at the White House Office of Science and Technology Policy (OSTP). While at the White House, Terah co-directed the White House Future of Artificial Intelligence Initiative, which engaged industry, the academic and technical community, civil society, and international stakeholders to formulate recommendations for a domestic policy strategy on machine intelligence. As part of this effort, she helped lead a landmark series of reports that explored the benefits and challenges of AI on behalf of the United States Government. Prior to the White House, Terah was a Fellow with the Harvard School of Engineering and Applied Sciences and also previously worked at the Harvard Kennedy School of Government Center for Public Leadership.
The AI for Healthcare session will feature Marzyeh Ghassemi, whose “Healthy ML” research focuses on creating and applying machine learning to understand and improve health. Improving health requires targeting and evidence, and Marzyeh tackles part of this puzzle with machine learning. This session will cover some of the novel technical opportunities for machine learning in health challenges and the important progress to be made with careful application to the domain. She will also walk through the dangers of applying methods without a robust understanding of the domain and of potential downstream uses.
Marzyeh Ghassemi, Assistant Professor, Faculties of Computer Science & Medicine, University of Toronto; Vector Institute faculty member; holder of a Canada CIFAR AI Chair and a Canada Research Chair.
Abstract: Improving health requires targeting and evidence. Marzyeh tackles part of this puzzle with machine learning. This session will cover some of the novel technical opportunities for machine learning in health challenges and the important progress to be made with careful application to the domain. She will also walk through the dangers of applying methods without a robust understanding of the domain and of potential downstream uses. Bio: Professor Ghassemi has a well-established academic track record of personal research contributions across computer science and clinical venues, including KDD, AAAI, MLHC, JAMIA, JMIR, JMLR, Nature Translational Psychiatry, and Critical Care. She is an active member of the scientific community, serves on the Board of Women in Machine Learning (WiML), and co-organized the past three NIPS Workshops on Machine Learning for Health (ML4H). She served as a NeurIPS 2019 Workshop Co-Chair and as a Board Member of the Machine Learning for Health Unconference. Previously, she was a Visiting Researcher with Alphabet's Verily and a postdoc with Dr. Peter Szolovits at MIT. Marzyeh targets “Healthy ML”, focusing on applying machine learning to understand and improve health. Professor Ghassemi completed her PhD at MIT, where her research focused on machine learning in health care. Prior to MIT, she received a master’s degree in biomedical engineering from Oxford University as a Marshall Scholar and B.S. degrees in computer science and electrical engineering as a Goldwater Scholar at New Mexico State University.
Abstract:
Recent advances in artificial intelligence and deep learning have undoubtedly been driven by the large amounts of data amassed over the years, helping firms, researchers, and practitioners achieve many amazing feats, most notably in recognition tasks, often surpassing human ability on several benchmarks. The gains, however, do not seem equally distributed to all who aspire to repeat the success of others in their respective domains, because of the data themselves. A select few are running away with the infrastructure and the competence they have built over time to collect and process data, leaving many others behind. For some, the struggle is finding ways to get data in the first place; for others, it is figuring out what to do with it. And while many give their data away without knowing what they get in return, growing awareness of the issue among the public and thought leaders is materializing into new regulations and proposals for how data should be governed and shared. In this seminar, Bongjun Ko, an AI Engineering Fellow at Stanford HAI, will share his thoughts on this issue, drawing on his experience as an engineer who has tried to overcome a lack of data when building data-driven solutions, and as an individual who has been supplying the “new oil of the 21st century”. Some of the open questions he would like to raise include: What can you do to remain competitive without data? Is data really the new oil? How much is a piece of data worth, and can it be measured?
AI promises to transform how government agencies work. Where will it have the biggest impact? What are some challenges around transparency, privacy, bias, and accountability? This talk will go beyond the headlines and share highlights of a just-completed report on AI in the US Government.
Speakers: David Freeman Engstrom, Professor and Associate Dean for Strategic Initiatives, Stanford Law School; Daniel Ho, Professor of Law, Professor of Political Science, and Director of the Regulation, Evaluation, and Governance Lab (RegLab), Stanford University.
David Freeman Engstrom is the Bernard D. Bergreen Faculty Scholar and an Associate Dean at Stanford Law School. He is an elected member of the American Law Institute and a faculty affiliate at the Stanford Institute for Human-Centered AI, CodeX: The Stanford Center for Legal Informatics, and the Regulation, Evaluation, and Governance Lab (RegLab). He received a J.D. from Stanford Law School, an M.Sc. from Oxford University, and a Ph.D. in political science from Yale University, and clerked for Chief Judge Diane P. Wood on the U.S. Court of Appeals for the Seventh Circuit. Before joining Stanford's faculty, he practiced law, representing clients before the U.S. Supreme Court and other courts and agencies.
Daniel Ho is the William Benjamin Scott and Luna M. Scott Professor of Law, Professor of Political Science, and Senior Fellow at the Stanford Institute for Economic Policy Research at Stanford University. Dr. Ho received his J.D. from Yale Law School and Ph.D. from Harvard University and clerked for Judge Stephen F. Williams on the U.S. Court of Appeals for the District of Columbia Circuit. He directs the Regulation, Evaluation, and Governance Lab (RegLab) at Stanford, is a Faculty Fellow at the Center for Advanced Study in the Behavioral Sciences, and is an Associate Director of the Stanford Institute for Human-Centered Artificial Intelligence (HAI).
How can AI and machine learning be leveraged to mitigate the impact of human activities on earth’s natural systems? Learn about data science tools and strategies being used to safeguard our water supply, feed the worldwide human population, and promote greater biodiversity and global sustainability.
Lucas Joppa, Chief Environmental Officer, Microsoft
As Microsoft’s first Chief Environmental Officer, Dr. Lucas Joppa works to advance the company’s core commitment to sustainability through technology innovation, program development, policy advancement, and global operational excellence. With a background in both environmental science and data science, Lucas is committed to using the power of advanced technology to help transform how society monitors, models, and ultimately manages Earth’s natural resources. Dr. Joppa founded Microsoft’s AI for Earth program in 2017, a five-year, $50 million cross-company effort dedicated to delivering technology-enabled solutions to global environmental challenges. Previously, Lucas was Microsoft’s Chief Environmental Scientist and led research programs in Microsoft Research. He remains an active scientist and one of Microsoft’s foremost AI thought leaders, speaking frequently on issues related to artificial intelligence, environmental science, and sustainability. With extensive publications in leading academic journals such as Science and Nature, Dr. Joppa is a uniquely credentialed voice for sustainability in the tech industry. He holds a PhD in Ecology from Duke University and a BS in Wildlife Ecology from the University of Wisconsin, and is a former Peace Corps volunteer in Malawi.
Stefano Ermon, Assistant Professor of Computer Science, Stanford University
Dr. Ermon is an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory and is a fellow of the Woods Institute for the Environment. His research is centered on techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty, and is motivated by a range of applications, particularly in the emerging field of computational sustainability. Dr. Ermon received his PhD from Cornell University.
Bias in government automated decision systems, the future of farmwork, digital literacy, algorithms in bail decisions, and more. The 2020 cohort of CCSRE Race and Technology Practitioner Fellows and Digital Civil Society Lab Non-Resident Fellows will present lightning talks followed by a networking lunch. Join us to learn about their work and explore possible collaborations.