Upcoming Events | Stanford HAI



Previous Events at HAI

HAI Virtual Community - The Impact of Robots on Staffing in Nursing Homes
May 14, 2020, 4:00 PM - 5:00 PM

Most studies of automation focus on manufacturing or use aggregate data. In one of the first studies of the service sector using establishment-level data, we examine the impact of robot adoption on staffing in nursing homes.


Robotics
Healthcare
HAI Weekly Seminar with Ron Chrisley - Against Ethical Robots
Seminar | May 08, 2020, 11:00 AM - 12:00 PM

Ethics, Equity, Inclusion
Robotics
Human Reasoning
HAI Weekly Seminar with Emanuel Moss and Jacob Metcalf - Owning Ethics: Organizational Responsibility and the Institutionalization of Ethics in Silicon Valley
Seminar | May 01, 2020, 11:00 AM - 12:00 PM

Ethics, Equity, Inclusion
HAI Weekly Seminar with Andrew Schwartz - Modeling the People Behind the Language: Human-Centered Natural Language Processing
Seminar | Apr 24, 2020, 11:00 AM - 12:00 PM

Natural Language Processing (NLP) conventionally focuses on modeling words, phrases, or documents. However, natural language is generated by people and with the growth of social media and automated assistants, NLP is increasingly tackling human problems that are social, psychological, or medical in nature.


HAI Weekly Seminar with Vinay Uday Prabhu - On the four horsemen of ethical malice in peer reviewed machine learning literature
Seminar | Apr 17, 2020, 11:00 AM - 12:00 PM

Ethics, Equity, Inclusion
HAI Weekly Seminar with David Robinson - Governing an Algorithm in the Wild
Seminar | Apr 10, 2020, 11:00 AM - 12:00 PM

COVID-19 and AI: A Virtual Conference
Conference | Apr 01, 2020, 9:00 AM - 4:00 PM

HAI Weekly Seminar with John Markoff - Second Thoughts on Digital Utopianism
Seminar | Mar 27, 2020, 11:00 AM - 12:00 PM

Bio: John Markoff is HAI’s Journalist-in-Residence. He is also a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS), participating in projects focusing on the future of work and artificial intelligence. He is currently researching a biography of Stewart Brand, the creator of the Whole Earth Catalog. Previously he was a Berggruen Fellow at CASBS and a staff historian at the Computer History Museum in Mountain View, Calif. Until 2017, he was a reporter at The New York Times, beginning in March 1988 as the paper’s national computer writer. Prior to joining the Times, he worked for the San Francisco Examiner, wrote about technology for Pacific News Service, was a reporter at Infoworld and West Coast editor for Byte Magazine, and wrote a column on personal computers for the San Jose Mercury. He has also been a lecturer at the University of California at Berkeley School of Journalism and an adjunct faculty member of the Stanford Graduate Program on Journalism. In 2013 he was awarded a Pulitzer Prize in explanatory reporting as part of a New York Times project on labor and automation. In 2007, he was named a fellow of the Society of Professional Journalists, the organization’s highest honor. In June 2010, the New York Times presented him with the Nathaniel Nash Award, given annually for foreign and business reporting.

He is the co-author of The High Cost of High Tech (Harper & Row) and Cyberpunk: Outlaws and Hackers on the Computer Frontier (Simon & Schuster), and co-authored Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw (Hyperion) with Tsutomu Shimomura. He also wrote What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry (Viking Books) and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (HarperCollins Ecco).

Markoff grew up in Palo Alto, California, and graduated from Whitman College in Walla Walla, Washington. He attended graduate school at the University of Oregon and received a master's degree in sociology.


HAI Weekly Seminar with Marietje Schaake
Seminar | Mar 20, 2020, 11:00 AM - 12:00 PM

Bio: Marietje Schaake is an International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the International Policy Director of the Cyber Policy Center, where she conducts policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she represents the center to governments, NGOs, and the technology industry, teaches courses on cyber policy from an international perspective, and brings leaders from around the world to Stanford to discuss cyber policy. Prior to joining Stanford, Schaake led an active career in politics and civic service. She represented the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in the European Parliament, where she was first elected in 2009 and focused on trade, foreign policy, and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.


HAI Weekly Seminar with Brian Cantwell Smith - Reckoning and Judgment: The Promise of AI
Seminar | Mar 06, 2020, 11:00 AM - 12:00 PM

Abstract: New developments in Artificial Intelligence, particularly deep learning and other forms of “second-wave” AI, are attracting enormous public attention.  Both triumphalists and doomsayers are predicting that human-level AI may be “just around the corner.”  To assess whether that prediction is true, we need a broad understanding of intelligence, in terms of which to assess: (i) what kinds of intelligence machines currently have, and will likely have in the future; and (ii) what kinds of intelligence people currently have, and may be capable of in the future.  As the first step in this direction, I distinguish two kinds of intelligence: (i) “reckoning,” the kind of calculative rationality that computers excel at, including both first- and second-wave AI; and (ii) “judgment,” a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action, that is appropriate to the situation in which it is deployed.  AI will develop world-changing reckoning systems, I argue, but nothing in AI as currently conceived approaches what is required to build a system capable of judgment. 

Bio: Brian Cantwell Smith is Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, where he is also Professor of Information, Philosophy, Cognitive Science, and the History and Philosophy of Science and Technology, and a Senior Fellow at Massey College. Smith’s research focuses on the philosophical foundations of computation, artificial intelligence, and mind, and on fundamental issues in metaphysics and epistemology. In the 1980s he developed the world’s first reflective programming language (3Lisp). He is the author of On the Origin of Objects (MIT Press, 1996) and On the Promise of Artificial Intelligence: Reckoning and Judgment (MIT Press, 2019).


AI for Underserved Billions in the Developing World
Mar 05, 2020, 4:30 PM - 5:30 PM

Rahul Panicker, Chief Innovation Officer, Wadhwani AI

Abstract: Drawing on our experience developing and deploying AI-for-social-good solutions (helping healthcare workers in villages of developing countries weigh newborns using just a smartphone, helping cotton farmers fight pest attacks, and helping tuberculosis-control programs find and support TB patients) and advising organizations like the WHO, the UN ITU, and governments on AI, I will share lessons learned and opportunities for AI to have large-scale impact across domains like health, agriculture, education, and financial inclusion. Such impact will require novel approaches across algorithms, human factors, regulatory frameworks, and systems thinking. AI-for-social-good also offers a rich source of problems for AI, spanning computer vision, weakly supervised learning, causal reasoning, domain adaptation, uncertainty calibration, explainability, computing on low-resource devices, and privacy-preserving learning. The Wadhwani Institute for Artificial Intelligence is an independent nonprofit research institute that develops and deploys AI-for-social-good solutions in the developing world.

Bio: Dr. Rahul Panicker, as Chief Innovation Officer, heads research at the Wadhwani Institute for Artificial Intelligence. Prior to this, he was co-founder of Embrace, a for-profit social enterprise that has helped over 500,000 babies worldwide through low-cost incubators that work without electricity. He is an MIT TR35 awardee, World Economic Forum Social Entrepreneur of the Year, an Industrial Design Society of America Gold winner, and an Echoing Green Fellow. He holds an MS/PhD in EE from Stanford University, is an alumnus of the Stanford d.school, and has a B.Tech from IIT Madras.

HAI Weekly Seminar with Thomas Dimson - Algorithms Algorithms Algorithms
Seminar | Feb 28, 2020, 11:00 AM - 12:00 PM

Abstract: The biggest challenge with the democratization of content is how to make sense of the scale. In the last decade, curation of content has consolidated into the hands of a few of the largest technology companies. Today, that curation takes the form of machine learning — often dubbed algorithms by the media. Thomas helped build and introduce the most controversial algorithms of Instagram: non-chronological feed and personalized recommendations. He will discuss challenges from the perspective of an engineer in the control room as Instagram scaled to serve over a billion people. Thomas will share a few of his thoughts about future directions as we start to form a dialogue about the responsibilities of platforms operating on a global scale.

Bio: Thomas Dimson is the original author of “The Algorithm” — the recommender systems behind Instagram's feed, stories, and discovery surfaces. He joined Instagram as one of its first 50 employees in 2013, working for seven years as a principal engineer and eventually an engineering director. In that time, he also invented products such as the stories polling sticker and Hyperlapse, and was named one of the top ten most creative people in business by Fast Company. Thomas graduated from the University of Waterloo with a bachelor of mathematics degree and received his master's in computer science from Stanford with a specialization in artificial intelligence.


Psychology-Neuroscience-Artificial Intelligence, Part 2
Workshop | Feb 27, 2020, 12:00 PM - 2:00 PM

AI and International Security
Workshop | Feb 26, 2020, 9:00 AM - 12:00 PM

Workshop Leader: Emilie Silva

A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of outside academics from other institutions, and a small number of private sector and governmental analysts to focus on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, identify gaps in our understanding, and build a network of scholars and experts to address these challenges.

The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by advances in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.

A technology that has the potential for such sweeping changes across human society should be evaluated for its potential effects on international stability. Many national security applications of AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated computer security tools to find and patch vulnerabilities, or machine learning systems to uncover suspicious behavior by terrorists. Current AI systems have substantial limitations and vulnerabilities, however, and a headlong rush into national security applications of artificial intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a “race to the bottom” on AI safety. Other security-related applications of AI could improve international stability.

CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial intelligence will influence international security and stability. It is critical for global stability to begin a discussion about ways to mitigate the risks while taking advantage of the benefits of autonomous systems and artificial intelligence. This project will build a community from three sectors that often do not intersect: AI researchers in academia and business; international security experts in academia; and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and reports, this project will foster a community of practice and begin laying the foundations for a field of study on AI and international security. The project will conclude with recommendations to policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence, while avoiding uses that could undermine stability.


AI for Good Seminar Series: AI for Human Rights
Feb 24, 2020
Megan Price - Executive Director of the Human Rights Data Analysis Group

Abstract: As a team of scientists working as statisticians for human rights, the Human Rights Data Analysis Group (HRDAG) partners with human rights advocacy organizations to identify questions that can be answered and arguments that can be strengthened using data science. Dr. Price’s talk will highlight how data science and AI methods and tools are being used to tell stories, build cases, and answer important questions about the human toll of conflicts in Syria, Mexico, and Guatemala. She will also address the potential harm that can be done when relying on incomplete and imperfect data in domestic situations such as predictive policing of drug use in Oakland.

Bio: As the Executive Director of the Human Rights Data Analysis Group, Megan Price designs strategies and methods for statistical analysis of human rights data for projects in a variety of locations including Guatemala, Colombia, and Syria. Her work in Guatemala includes serving as the lead statistician on a project analyzing documents from the National Police Archive; she has also contributed analyses submitted as evidence in two court cases in Guatemala. Her work in Syria includes serving as the lead statistician and author on three reports, commissioned by the Office of the United Nations High Commissioner for Human Rights (OHCHR), on documented deaths in that country. Megan is a member of the Technical Advisory Board for the Office of the Prosecutor at the International Criminal Court, on the Board of Directors for Tor, and a Research Fellow at the Carnegie Mellon University Center for Human Rights Science. She is the Human Rights Editor for the Statistical Journal of the International Association for Official Statistics (IAOS) and on the editorial board of Significance Magazine. She earned her doctorate in biostatistics and a Certificate in Human Rights from the Rollins School of Public Health at Emory University. She also holds a master of science degree and a bachelor of science degree in statistics from Case Western Reserve University.

HAI Weekly Seminar with Garance Burke - Steering Journalism Towards Data Science
Seminar
Feb 21, 2020, 11:00 AM - 12:00 PM
Abstract: Algorithmic tools are transforming our daily lives, but journalism is still playing catch-up. As in other times of global transition, news consumers are anxious that artificial intelligence will overtake human abilities, and they question whether these systems will take our jobs, amplify racial bias, or expose our privacy. As one of the few technically trained data journalists, it's clear to me that most newsrooms lack the training to understand how algorithms work, let alone how they are deployed to guide crucial decisions in hiring, banking, criminal justice, and medicine. And the rapidly expanding field of algorithmic accountability reporting has yet to be codified in simple terms that most reporters can understand. Naturally, this leads to questions: How can we ensure that reporters ask the right questions? Or that a larger group of journalists can access work examining the technology's impacts on society? How can we encourage nuanced journalism about AI that accurately reflects the state of the science? As an inaugural 2020 Human-Centered Artificial Intelligence-John S. Knight journalism fellow, I am developing a new set of journalistic best practices to provide reporters and editors with scientifically rigorous standards for algorithmic accountability reporting.

Bio: Garance Burke is an investigative journalist who applies her training in statistical analysis to reveal vital truths in the public interest. Often driven by data, her work for The Associated Press on topics ranging from immigration to cybersecurity has helped shape presidential elections, inspire congressional hearings, and spark federal investigations. As an inaugural 2020 Institute for Human-Centered Artificial Intelligence-John S. Knight Journalism fellow, she is deepening her data science skills to draft standards that will help train more reporters to produce deeper stories about the algorithmic systems they encounter on their beats. In 2019, her stories were honored as a finalist for the Pulitzer Prize in national reporting and the Anthony Shadid Award for Journalism Ethics, and received the Robert F. Kennedy Journalism Award and the National Press Club Award for Diplomatic Correspondence. Burke began her career at the Mexican financial newspaper El Financiero, then worked in Mexico City for The Washington Post and The Boston Globe. She received dual master's degrees from the University of California, Berkeley's Goldman School of Public Policy and Graduate School of Journalism, where she has taught as a lecturer in basic data journalism.

HAI Weekly Seminar with Lucy Suchman - Demystifying AI as an Ethical Project
Seminar
Feb 14, 2020

This talk develops the proposal that a central – and neglected – ethical challenge for the field of AI is the demystification of the techniques and technologies that constitute it.


Ethics, Equity, Inclusion
HAI Monthly Community Building Reception - A Conversation about AI Governance
Feb 11, 2020, 4:00 PM - 5:30 PM

Join California Supreme Court Justice Cuéllar, who teaches the popular "Regulating AI" course at Stanford; Dan Ho, Associate Director of HAI and professor of law and of political science; and Terah Lyons, Founding Executive Director of the Partnership on AI, for a conversation on the law, regulation, and governance of AI! The three will offer a range of perspectives on the promise, challenges, and directions of AI governance.


Government, Public Administration
Law Enforcement and Justice
AI for Good Seminar Series: AI for Healthcare
Feb 10, 2020, 4:30 PM - 5:30 PM

The AI for Healthcare session will feature Marzyeh Ghassemi, whose "Healthy ML" research focuses on creating and applying machine learning to understand and improve health. Improving health requires targeting and evidence, and Marzyeh tackles part of this puzzle with machine learning. The session will cover some of the novel technical opportunities for machine learning in health challenges and the important progress that can be made with careful application to the domain. She will also walk through the dangers of applying methods without a robust understanding of the domain and of potential downstream uses.


Healthcare
HAI Weekly Seminar with Bongjun Ko - The Value of Data: An Engineer’s Perspective
Seminar
Feb 07, 2020, 11:00 AM - 12:00 PM

Recent advances in artificial intelligence and deep learning have undoubtedly been driven by the large amounts of data amassed over the years, helping firms, researchers, and practitioners achieve many remarkable feats, most notably in recognition tasks, often surpassing human ability on several benchmarks.


Machine Learning