AI promises to transform how government agencies work. Where will it have the biggest impact? What are some challenges around transparency, privacy, bias, and accountability? This talk will go beyond the headlines and share highlights of a just-completed report on AI in the US Government.
How can AI and machine learning be leveraged to mitigate the impact of human activities on earth’s natural systems? Learn about data science tools and strategies being used to safeguard our water supply, feed the worldwide human population, and promote greater biodiversity and global sustainability.
Bias in government automated decision systems, the future of farmwork, digital literacy, algorithms in bail decisions, and more.
The content shared on social media is among the largest data sets on human behavior in history. I leverage this data to address questions in the psychological sciences. Specifically, I apply natural language processing and machine learning to characterize and measure psychological phenomena with a focus on mental and physical health.
Mykel Kochenderfer, Assistant Professor of Aeronautics and Astronautics and Assistant Professor, by courtesy, of Computer Science at Stanford University
Mykel is Assistant Professor of Aeronautics and Astronautics and Assistant Professor, by courtesy, of Computer Science at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision making systems. Of particular interest are systems for air traffic control, unmanned aircraft, and automated driving where decisions must be made in uncertain, dynamic environments while maintaining safety and efficiency. Research at SISL focuses on efficient computational methods for deriving optimal decision strategies from high-dimensional, probabilistic problem representations. Prior to joining the faculty in 2013, he was at MIT Lincoln Laboratory, where he worked on airspace modeling and aircraft collision avoidance. He received his Ph.D. from the University of Edinburgh in 2006, where he studied at the Institute of Perception, Action and Behaviour in the School of Informatics. He received B.S. and M.S. degrees in computer science from Stanford University in 2003. Prof. Kochenderfer is the director of the SAIL-Toyota Center for AI Research and a co-director of the Center for AI Safety. He is affiliated with the Stanford Artificial Intelligence Laboratory (SAIL), the Human-Centered AI (HAI) Institute, the Symbolic Systems Program, the Bio-X Institute, the Wu Tsai Neurosciences Institute, and the Center for Automotive Research at Stanford (CARS). In 2017, he was awarded the DARPA Young Faculty Award. He is an associate editor of the Journal of Artificial Intelligence Research and the Journal of Aerospace Information Systems. He is an author of the textbooks Decision Making under Uncertainty: Theory and Application (MIT Press, 2015) and Algorithms for Optimization (MIT Press, 2019).
He is a third-generation pilot.

Bryan Casey, Legal Fellow at the Center for Automotive Research at Stanford University
Bryan Casey is a Legal Fellow at the Center for Automotive Research at Stanford, a Lecturer at Stanford Law School, and an affiliate scholar at the Stanford Machine Learning Group, CodeX: The Center for Legal Informatics, and the Transatlantic Technology Law Forum. His research covers a broad range of issues at the intersection of law and emerging artificial intelligence technologies, particularly those involving transportation systems. He has written extensively on the legal implications of machine decision making, algorithmic explainability, and the role of lawyers as gatekeepers overseeing the deployment of AI-embedded products. Bryan’s scholarship has appeared in Northwestern University Law Review, Berkeley Technology Law Journal, and Stanford Law Review Online, among other journals. He also regularly comments in media outlets including CNN, Wired Magazine, Futurism, and The Stanford Lawyer. His recent work focuses on the competing roles of legality, morality, and profit-maximization in commercial AI systems with significant social impacts. His 2018-2019 course offerings at Stanford Law School include The Future of Algorithms and Lawyering for Innovation: Artificial Intelligence.

Clark Barrett, Associate Professor (Research) of Computer Science, Stanford University
Clark Barrett joined Stanford University as an Associate Professor (Research) of Computer Science in September 2016. Before that, he was an Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. His expertise is in constraint solving and its applications to system verification and security. His PhD dissertation introduced a novel approach to constraint solving now known as Satisfiability Modulo Theories (SMT). Today, he is recognized as one of the world's experts in the development and application of SMT techniques.
He was also an early pioneer in the development of formal hardware verification: at Intel, he collaborated on a novel theorem prover used to verify key microprocessor properties; and at 0-in Design Automation (now part of Mentor Graphics), he helped build one of the first industrially successful assertion-based verification tool-sets for hardware. He is an ACM Distinguished Scientist.

Chris Gerdes, Professor of Mechanical Engineering, Director of the Center for Automotive Research (CARS), and Director of the Revs Program, Stanford University
Chris studies how cars move, how humans drive cars, and how to design future cars that work cooperatively with the driver or drive themselves. When not teaching on campus, he can often be found at the racetrack with students, instrumenting historic race cars or trying out their latest prototypes for the future. Vehicles in the lab include X1, an entirely student-built test vehicle, and Shelley, an Audi TT-S capable of turning a competitive lap time around the track without a human driver. Professor Gerdes and his team have been recognized with a number of awards including the Presidential Early Career Award for Scientists and Engineers, the Ralph Teetor Award from SAE International, and the Rudolf Kalman Award from the American Society of Mechanical Engineers.
Breakthroughs in technology often have humble origins. Through its Google AI Impact Challenge grant program, Google.org lends a helping hand to nonprofit innovators and social entrepreneurs who are using the power of AI to address social and environmental challenges. This session will feature a panel of Google.org Impact Challenge Grantees who are using AI and machine learning to tackle issues affecting the environment, educational equity, at-risk youth, and mental health.
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, which rely on extremely sparse representations.
Disruptive new technologies are often heralded for their power to transform industries, increase efficiency, and improve lives. However, emerging technologies such as artificial intelligence and quantum computing don’t just disrupt industries: they disrupt the workforce.
Rob Reich, Associate Director, HAI
Rob is professor of political science at Stanford University and, by courtesy, professor of philosophy and at the Graduate School of Education. He is the director of the Center for Ethics in Society and faculty co-director of the Center on Philanthropy and Civil Society (publisher of the Stanford Social Innovation Review), both at Stanford University. He is also associate director of the Institute for Human-Centered Artificial Intelligence.
He is the author or editor of several books on education and a book on the relationship between philanthropy, democracy, and justice: Just Giving: Why Philanthropy is Failing Democracy and How It Can Do Better (Princeton University Press, 2018) and Philanthropy in Democratic Societies (edited with Chiara Cordelli and Lucy Bernholz). His current work focuses on ethics and technology, and he is editing a new volume called Digital Technology and Democracy (with Lucy Bernholz and Helene Landemore). He is the recipient of multiple teaching awards, including the Phi Beta Kappa Undergraduate Teaching Award and the Walter J. Gores Award, Stanford University. He is currently a University Fellow in Undergraduate Education at Stanford. He is a board member of the Spencer Foundation and the magazine Boston Review.

Kate Vredenburgh, HAI-EIS Fellow
Kate received her Ph.D. in philosophy from Harvard University. She works mainly on questions in the philosophy of social science and political philosophy. The overarching motivation guiding her research is to understand how background commitments influence modeling in the social sciences and computer science, to reflect on how they should, and to build fairer models on that basis. She also works on political and ethical questions inspired by the use of technology and social science by corporations and by governments. For example, Kate is currently working on a project arguing for a right to explanation, inspired by recent discussions surrounding the EU's General Data Protection Regulation (GDPR) and interpretability in computer science. Kate will join the Center for Ethics as an interdisciplinary ethics fellow in partnership with the Stanford Institute for Human-Centered Artificial Intelligence.

Todd Karhu, HAI-EIS Fellow
Todd received his Ph.D. in philosophy from the London School of Economics. Before LSE, he completed an M.Phil. in political theory at Oxford University.
His doctoral dissertation focuses on theoretical and practical issues in the ethics of killing, and a few other normative matters involving death. On the theoretical side, he has worked on the relationship between the wrongness of killing and the badness of death, and on how killing and dying relate to the metaphysics of time. On the more practical side, he has worked on the question of the extent of one's right to self-defense in the context of war and the moral duties people incur in virtue of killing others.
This workshop, focused on “Uncertainty in AI Situations,” asks researchers to consider what an AI can do when faced with uncertainty. Machine learning algorithms whose classifications rely on posterior probabilities of membership often produce ambiguous results: when training data are unavailable or cases are inherently ambiguous, the likelihood of any outcome is approximately even. In such situations, the human programmers must decide how the machine handles ambiguity. Whether it makes a “best-fit” classification or reports potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.
Some questions that begin the process of advancing AI toward a new intellectual understanding of the trickiest problems in the machine-learning environment:
• How do researchers create training sets that engage with uncertainty, particularly
when deciding between reflecting real-world data and curating data sets to avoid
bias?
• How can we frame ontologies, typologies, and epistemologies that can account for,
and help solve, ambiguity in data and indecision in AI?
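The best-fit-versus-report-error choice described above can be made concrete with a minimal sketch (not drawn from the workshop itself; the function name and margin threshold are illustrative assumptions): a classifier that returns its best-fit label only when the posterior probabilities are clearly separated, and abstains when they are approximately even.

```python
def classify_with_abstention(posteriors, margin=0.2):
    """Return the best-fit label, or None (abstain) when the top two
    posterior probabilities are within `margin` of each other, i.e.
    the case is ambiguous and forcing a label would hide uncertainty.

    `posteriors` maps each candidate label to its posterior probability.
    The `margin` value is an illustrative design choice, not a standard.
    """
    ranked = sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    runner_up_p = ranked[1][1] if len(ranked) > 1 else 0.0
    if best_p - runner_up_p < margin:
        return None  # report potential error rather than a "best fit"
    return best_label


# Clear-cut case: the best-fit classification is returned.
print(classify_with_abstention({"cat": 0.85, "dog": 0.15}))  # cat
# Approximately even posteriors: the model abstains.
print(classify_with_abstention({"cat": 0.52, "dog": 0.48}))  # None
```

The single `margin` parameter is exactly where the workshop's tension lives: set it to zero and the model always commits to a best-fit label; raise it and the model increasingly defers ambiguous cases to humans.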
Van Ton-Quinlivan is a nationally recognized thought leader in workforce development, quoted in The New York Times, Chronicle of Higher Education, Stanford Social Innovation Review, U.S. News & World Report, and other publications.
Twin revolutions at the start of the 21st century are shaking up the very idea of what it means to be human. Computer vision and image recognition are at the heart of the AI revolution. And CRISPR is a powerful new technique for genetic editing that allows humans to intervene in evolution.
Good news! The near future has arrived and you’re ready to purchase your first fully autonomous vehicle. You have narrowed down your search to a few manufacturers and have just one decision left to make: How would you like your vehicle to respond if it finds itself in a potential collision with other autonomous vehicles?
We will describe the Stanford Medicine Program for AI in Healthcare, which aims to bring AI into clinical use, safely and ethically. The session will begin with an overview of the effort and then focus on describing a project to improve palliative care using machine learning. We will summarize the creation and validation of a mortality prediction model, describe the associated care planning workflow it triggers and the work constraints it needs to function under. We will present preliminary results on an HAI supported project for understanding and addressing ethical challenges with implementation of machine learning to advance palliative care. Using this real-life example, we will elucidate several of the ethical challenges that need to be studied and addressed when combining artificial intelligence technologies with medical expertise to help doctors make faster, more informed and humane decisions.
In this talk, Pamela Chen, 2020 Human-Centered AI and JSK Journalism Fellow at Stanford, shares her experiences leading an editorial team at Instagram as the company scaled content discovery to serve more than 1 billion monthly active users. Spoiler alert: it doesn’t go as planned.
This conference is anchored in and builds on the release of the National Academy of Medicine (NAM) Special Publication “Artificial Intelligence in Healthcare: The Hope, The Hype, The Promise, The Peril,” co-led by Michael Matheny and Sonoo Thadaney Israni.
Objectives: At the conclusion of this activity, participants should be able to:
Evaluate AI in the healthcare landscape
Critically assess the opportunities for AI in healthcare
Develop appropriate criteria for evaluating/deploying AI solutions
Build frameworks for creating and testing AI healthcare solutions
The next generation of user experiences will produce 1000-fold improvements in human capabilities. These new tools will amplify, augment, enhance, and empower people, just as the Web, email, search, navigation, digital photography, and many other applications have already done. Rather than emphasize autonomous machines and humanoid robots as team partners, these new tools will produce comprehensible, predictable, and controllable applications that promote self-efficacy, human responsibility, and social participation at scale. The goal is to ensure human control, while increasing the level of automation.
Conversations about ethics and AI are commonplace today, but they are often pitched at a high level of generality or abstraction. In this workshop, we gathered together leading young scholars, chiefly philosophers, to discuss a more detailed research agenda with a particular focus on moral and political philosophy and their intersections with AI. Topics included AI and explainability, AI and value alignment, governance of AI, and more.