Abstract: The content shared on social media is among the largest data sets on human behavior in history. I leverage this data to address questions in the psychological sciences. Specifically, I apply natural language processing and machine learning to characterize and measure psychological phenomena with a focus on mental and physical health. For depression, I will show that machine learning models applied to Facebook status histories can predict future depression as documented in the medical records of a sample of patients. For heart disease, the leading cause of death, I demonstrate how prediction models derived from geo-tagged Tweets can estimate county mortality rates better than gold-standard epidemiological models, and at the same time give us insight into the sociocultural context of heart disease. I will also present preliminary findings on my emerging project to measure the subjective well-being of large populations. Across these studies, I argue that AI-based approaches to social media can augment clinical practice, guide prevention, and inform public policy.
Due to unforeseen circumstances, we regretfully have to cancel this event. We hope to schedule this talk at a later date.
We must separate the technical endeavor of imbuing artificial systems with moral agency from philosophical questions about the ethics of doing so. Both are important ventures, but they are distinct, and the nature of the latter depends on the nature of the former. Failure to separate the two might lead to much wasted time reasoning about the ethics of technologies that will never, or could never, be created. However, in the context of a specific system engaged in artificial ethics, meaningful philosophical work can be done. To provide a guiding example, this talk sets out an abstract framework, based on dynamic systems theory, for creating an artificial system with moral agency. The framework itself is speculative, but it is plausible and, more importantly, concrete enough to ground philosophical work. Using that design, the talk poses philosophical questions directly relevant to that system and outlines mechanisms by which people with different value systems might go about answering them. Finally, this talk is not completely abstract: it directly informs the speaker's approach to writing ISO AI standards on ethically building artificially intelligent systems. It concludes with an introduction to how those standards are taking shape and how people interested in the topic can contribute.
Mykel Kochenderfer, Assistant Professor of Aeronautics and Astronautics and Assistant Professor, by courtesy, of Computer Science at Stanford University
Mykel is Assistant Professor of Aeronautics and Astronautics and Assistant Professor, by courtesy, of Computer Science at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision-making systems. Of particular interest are systems for air traffic control, unmanned aircraft, and automated driving, where decisions must be made in uncertain, dynamic environments while maintaining safety and efficiency. Research at SISL focuses on efficient computational methods for deriving optimal decision strategies from high-dimensional, probabilistic problem representations. Prior to joining the faculty in 2013, he was at MIT Lincoln Laboratory, where he worked on airspace modeling and aircraft collision avoidance. He received his Ph.D. from the University of Edinburgh in 2006, where he studied at the Institute of Perception, Action and Behaviour in the School of Informatics. He received B.S. and M.S. degrees in computer science from Stanford University in 2003. Prof. Kochenderfer is the director of the SAIL-Toyota Center for AI Research and a co-director of the Center for AI Safety. He is affiliated with the Stanford Artificial Intelligence Laboratory (SAIL), the Human-Centered AI (HAI) Institute, the Symbolic Systems Program, the Bio-X Institute, the Wu Tsai Neurosciences Institute, and the Center for Automotive Research at Stanford (CARS). In 2017, he was awarded the DARPA Young Faculty Award. He is an associate editor of the Journal of Artificial Intelligence Research and the Journal of Aerospace Information Systems. He is an author of the textbooks Decision Making under Uncertainty: Theory and Application (MIT Press, 2015) and Algorithms for Optimization (MIT Press, 2019). He is a third-generation pilot.
Bryan Casey, Legal Fellow at the Center for Automotive Research at Stanford University
Bryan Casey is a Legal Fellow at the Center for Automotive Research at Stanford, a Lecturer at Stanford Law School, and an affiliate scholar at the Stanford Machine Learning Group, CodeX: The Center for Legal Informatics, and the Transatlantic Technology Law Forum. His research covers a broad range of issues at the intersection of law and emerging artificial intelligence technologies, particularly those involving transportation systems. He has written extensively on the legal implications of machine decision making, algorithmic explainability, and the role of lawyers as gatekeepers overseeing the deployment of AI-embedded products. Bryan's scholarship has appeared in the Northwestern University Law Review, the Berkeley Technology Law Journal, and the Stanford Law Review Online, among other journals. He also regularly comments in media outlets including CNN, Wired Magazine, Futurism, and The Stanford Lawyer. His recent work focuses on the competing roles of legality, morality, and profit-maximization in commercial AI systems with significant social impacts. His 2018-2019 course offerings at Stanford Law School include The Future of Algorithms and Lawyering for Innovation: Artificial Intelligence.
Clark Barrett, Associate Professor (Research) of Computer Science, Stanford University
Clark Barrett joined Stanford University as an Associate Professor (Research) of Computer Science in September 2016. Before that, he was an Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences at New York University. His expertise is in constraint solving and its applications to system verification and security. His PhD dissertation introduced a novel approach to constraint solving now known as Satisfiability Modulo Theories (SMT). Today, he is recognized as one of the world's experts in the development and application of SMT techniques. He was also an early pioneer in formal hardware verification: at Intel, he collaborated on a novel theorem prover used to verify key microprocessor properties, and at 0-in Design Automation (now part of Mentor Graphics), he helped build one of the first industrially successful assertion-based verification toolsets for hardware. He is an ACM Distinguished Scientist.
Chris Gerdes, Professor of Mechanical Engineering, Director of the Center for Automotive Research (CARS), and Director of the Revs Program, Stanford University
Chris studies how cars move, how humans drive cars, and how to design future cars that work cooperatively with the driver or drive themselves. When not teaching on campus, he can often be found at the racetrack with students, instrumenting historic race cars or trying out their latest prototypes for the future. Vehicles in the lab include X1, an entirely student-built test vehicle, and Shelley, an Audi TT-S capable of turning a competitive lap time around the track without a human driver. Professor Gerdes and his team have been recognized with a number of awards, including the Presidential Early Career Award for Scientists and Engineers, the Ralph Teetor Award from SAE International, and the Rudolf Kalman Award from the American Society of Mechanical Engineers.
Heejae Lim, Founder and CEO, TalkingPoints - TalkingPoints drives student success in low-income, diverse areas through AI-enabled two-way translated communication and personalized coaching content that guides parents' engagement with teachers and at home with their children, thereby building strong partnerships across families, schools, and communities.
Grace Mitchell, Data Analyst, WattTime - WattTime is a nonprofit that offers technology solutions that make it easy for anyone to achieve emissions reductions without compromising cost, comfort, and function.
Nick Hobbs, Senior Data Scientist, The Trevor Project - The Trevor Project saves lives by supporting at-risk LGBTQ youth via phone, text, and chat. Using natural language processing and sentiment analysis, counselors will be able to determine an LGBTQ youth's suicide risk level and better tailor services for individuals seeking help.
Mollie Javerbaum, Google.org - The Google AI Impact Challenge was an open call to organizations to submit their ideas on how AI could help address societal challenges. Out of more than 2,600 proposals from 119 countries, Google selected 20 organizations to support with a total of $25M in grant funding from Google.org, coaching from Google’s AI experts, credit and consulting from Google Cloud, and inclusion in a custom accelerator program.
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, where activity and connectivity are extremely sparse. In this talk, Subutai will first discuss what is known about the sparsity of activations and connectivity in the neocortex. He will also summarize new experimental data around active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, Subutai will discuss how these insights from the brain can be applied to practical machine learning applications. He will show how sparse representations can give rise to improved robustness, continuous learning, powerful unsupervised learning rules, and greater computational efficiency.
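To make the contrast with dense representations concrete, here is a minimal NumPy sketch of one common way to enforce activation sparsity: a k-winners-take-all rule that keeps only the k largest activations in each vector. This is an illustrative toy under assumed shapes and names, not Numenta's implementation.

```python
import numpy as np

def k_winners_take_all(activations: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest activations in each row; zero out the rest."""
    out = np.zeros_like(activations)
    # argpartition finds the top-k indices per row without a full sort.
    top_k = np.argpartition(activations, -k, axis=1)[:, -k:]
    rows = np.arange(activations.shape[0])[:, None]
    out[rows, top_k] = activations[rows, top_k]
    return out

# A batch of 2 dense activation vectors, reduced to 3 nonzeros each.
dense = np.random.randn(2, 10)
sparse = k_winners_take_all(dense, k=3)
print((sparse != 0).sum(axis=1))  # -> [3 3]
```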
Disruptive new technologies are often heralded for their power to transform industries, increase efficiency, and improve lives. However, emerging technologies such as artificial intelligence and quantum computing don’t just disrupt industries: they disrupt the workforce. Technological innovation creates new jobs and transforms existing roles, resulting in hybrid jobs that fuse skills from disparate fields in unfamiliar ways. Research from Burning Glass finds that over 250 occupations are now highly hybridized, accounting for 1 in 8 job openings. Moreover, disciplines born in the digital age such as data analytics, programming, and cybersecurity are spreading across the economy, forcing firms, training providers, and individuals to keep pace with a dizzying array of new skillsets to manage rapid digital transformation. This seminar will explore Burning Glass’s research on emerging technologies and their impact on the job market, discuss the new foundational skill sets needed in a digital economy, and consider the implications for firms, training providers, policymakers, and individuals as disruptive new technologies introduce new skill needs and rewrite the DNA of the workforce.
Rob Reich, Associate Director, HAI
Rob is professor of political science at Stanford University and, by courtesy, professor of philosophy and at the Graduate School of Education. He is the director of the Center for Ethics in Society and faculty co-director of the Center on Philanthropy and Civil Society (publisher of the Stanford Social Innovation Review), both at Stanford. He is also associate director of the Institute for Human-Centered Artificial Intelligence.
He is the author or editor of several books on education and a book on the relationship between philanthropy, democracy, and justice: Just Giving: Why Philanthropy is Failing Democracy and How It Can Do Better (Princeton University Press, 2018) and Philanthropy in Democratic Societies (edited with Chiara Cordelli and Lucy Bernholz). His current work focuses on ethics and technology, and he is editing a new volume called Digital Technology and Democracy (with Lucy Bernholz and Helene Landemore). He is the recipient of multiple teaching awards, including the Phi Beta Kappa Undergraduate Teaching Award and the Walter J. Gores Award from Stanford University. He is currently a University Fellow in Undergraduate Education at Stanford. He is a board member of the Spencer Foundation and the magazine Boston Review.
Kate Vredenburgh, HAI-EIS Fellow
Kate received her Ph.D. in philosophy from Harvard University. She works mainly on questions in the philosophy of social science and political philosophy. The overarching motivation guiding her research is to understand how background commitments influence modeling in the social sciences and computer science, to reflect on how they should, and to build fairer models on that basis. She also works on political and ethical questions inspired by the use of technology and social science by corporations and governments. For example, Kate is currently working on a project arguing for a right to explanation, inspired by recent discussions surrounding the EU's General Data Protection Regulation (GDPR) and interpretability in computer science. Kate will join the Center for Ethics as an interdisciplinary ethics fellow in partnership with the Stanford Institute for Human-Centered Artificial Intelligence.
Todd Karhu, HAI-EIS Fellow
Todd received his Ph.D. in philosophy from the London School of Economics. Before LSE, he completed an M.Phil. in political theory at Oxford University. His doctoral dissertation focuses on theoretical and practical issues in the ethics of killing, and a few other normative matters involving death. On the theoretical side, he has worked on the relationship between the wrongness of killing and the badness of death, and on how killing and dying relate to the metaphysics of time. On the more practical side, he has worked on the extent of one's right to self-defense in the context of war and the moral duties people incur in virtue of killing others.
This workshop, focused on "Uncertainty in AI Situations," asks researchers to consider what an AI can do when faced with uncertainty. Machine learning algorithms whose classifications rely on posterior probabilities of class membership often produce ambiguous results: when training data are unavailable or cases are genuinely ambiguous, the likelihood of every outcome can be approximately even. In such situations, human programmers must decide how the machine handles ambiguity. Whether it makes a "best-fit" classification or reports potential error, there is always a potential conflict between the mathematical rigor of the model and the ambiguity of real-world use cases.
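As a concrete illustration of that design decision, the sketch below (plain Python with scikit-learn; the margin threshold and helper name are illustrative assumptions, not a method proposed at the workshop) abstains from classifying when the top two posterior probabilities are nearly even, rather than forcing a best-fit label.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def predict_or_abstain(model, X, margin=0.2):
    """Predict class labels, but return -1 (abstain) when the two
    largest posterior probabilities are within `margin` of each other."""
    proba = model.predict_proba(X)
    sorted_proba = np.sort(proba, axis=1)
    ambiguous = (sorted_proba[:, -1] - sorted_proba[:, -2]) < margin
    preds = model.classes_[np.argmax(proba, axis=1)]
    return np.where(ambiguous, -1, preds)

# Toy data: two overlapping Gaussian classes, so some cases are
# genuinely ambiguous no matter how the model is trained.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)
print(predict_or_abstain(clf, X[:5]))  # mix of 0/1 labels and -1 abstentions
```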
Among the questions posed to begin advancing AI to a new intellectual understanding of the trickiest problems in the machine-learning environment:
• How do researchers create training sets that engage with uncertainty, particularly when deciding between reflecting real-world data and curating data sets to avoid bias?
• How can we frame ontologies, typologies, and epistemologies that can account for, and help solve, ambiguity in data and indecision in AI?
The Impact of Artificial Intelligence on the Labor Market
Michael developed a new method to predict the impacts of technology on occupations. He used the overlap between the text of job task descriptions and the text of patents to construct a measure of the exposure of tasks to automation. He first applied the method to historical cases such as software and industrial robots.
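The sketch below shows one simple way such a text-overlap measure might be operationalized: TF-IDF vectors for task descriptions and patent texts, with each task's exposure proxied by its maximum cosine similarity to any patent. This is a hedged illustration with toy corpora, not Webb's actual procedure.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative corpora; real data would be occupational task
# statements (e.g., O*NET) and patent titles/abstracts.
tasks = [
    "inspect finished products for defects",
    "negotiate contracts with suppliers",
]
patents = [
    "a machine vision system for detecting surface defects",
    "an automated welding apparatus for assembly lines",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(tasks + patents)
task_vecs, patent_vecs = matrix[: len(tasks)], matrix[len(tasks):]

# Exposure proxy: each task's maximum similarity to any patent.
exposure = cosine_similarity(task_vecs, patent_vecs).max(axis=1)
for task, score in zip(tasks, exposure):
    print(f"{score:.2f}  {task}")
```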
Van Ton-Quinlivan is a nationally recognized thought leader in workforce development, quoted in The New York Times, the Chronicle of Higher Education, the Stanford Social Innovation Review, U.S. News & World Report, and other publications. Her career spans the public, private, and nonprofit sectors. Most recently, she served as executive vice chancellor for the nation's largest system of higher education, the California Community Colleges, and grew public investment in workforce programs from $100M to over $1B during her tenure. Her talk will outline current higher education reforms, present provocations on how the future of work may unfold, and highlight where our social structures must evolve for the workforce development challenges ahead. Follow her @WorkforceVan.
Occupant-Favoring Autonomous Vehicles
Good news! The near future has arrived and you’re ready to purchase your first fully autonomous vehicle. You have narrowed down your search to a few manufacturers and have just one decision left to make: How would you like your vehicle to respond if it finds itself in a potential collision with other autonomous vehicles? If, like most people, you care more about your own safety and that of your friends and family than you care about the safety of strangers on the road, you will understandably be drawn to a vehicle that is programmed to be, at least to some degree, occupant-favoring. Such a vehicle would tend to select courses of action that reduce harm to its own passengers in a crash, even when doing so means that a greater harm will befall the occupants of other vehicles. Because most consumers are like you, occupant-favoring vehicles will soon dominate the market if they are not regulated. In this talk, which draws on a joint project with Tomi Francis (Oxford), Todd Karhu will discuss reasons for and against a regulatory ban on occupant-favoring vehicles, including the possibility that if no passengers are allowed to operate occupant-favoring vehicles, every passenger will be safer than if all do.
POSTPONED - This talk has been postponed due to unavoidable circumstances. Please check back later for a rescheduled date.
This talk will explore the findings of Advancing Racial Literacy in Tech and argue that ethics alone will not get us to a human-centered approach to the design, deployment, and regulation of advanced technological systems.
Mutale Nkonde is the founding Executive Director of AI For the People, a nonprofit that seeks to use popular culture to educate Black audiences about the social justice implications of the deployment of AI systems in public life. Prior to this, she worked as an AI policy advisor and was part of the team that introduced the Algorithmic and Deep Fakes Accountability Acts, as well as the No Biometric Barriers to Housing Act, to the House.
A previous talk in May 2019, titled Advancing Racial Literacy in Tech, made the case that it will take more than ethics to create a more human-centered computing industry. Mutale Nkonde is extending this work as a fellow at the Berkman Klein Center for Internet & Society at Harvard University.
Learn more about Mutale Nkonde’s work at https://www.mutale.tech/
We will describe the Stanford Medicine Program for AI in Healthcare, which aims to bring AI into clinical use, safely and ethically. The session will begin with an overview of the effort and then focus on describing a project to improve palliative care using machine learning. We will summarize the creation and validation of a mortality prediction model, describe the associated care-planning workflow it triggers, and explain the constraints under which it must function. We will present preliminary results from an HAI-supported project on understanding and addressing ethical challenges in implementing machine learning to advance palliative care. Using this real-life example, we will elucidate several of the ethical challenges that need to be studied and addressed when combining artificial intelligence technologies with medical expertise to help doctors make faster, more informed, and more humane decisions.
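For readers who want a concrete picture of what "creation and validation of a mortality prediction model" involves, here is a minimal, entirely synthetic scikit-learn sketch: fit a classifier on a training split and report discrimination (AUROC) on held-out data. The data and model choice are illustrative assumptions, not the Stanford Medicine model or its features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EHR features (labs, diagnoses, utilization).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 1).astype(int)

# Hold out a validation split: any clinical model must be validated
# before it can be allowed to trigger a care-planning workflow.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
print(f"validation AUROC: {auc:.3f}")
```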
Dr. Shah's research focuses on combining machine learning and prior knowledge in medical ontologies to enable use cases of the learning health system. Shah received the AMIA New Investigator Award for 2013 and the Stanford Biosciences Faculty Teaching Award for outstanding teaching in his graduate class on "Data driven medicine". Dr. Shah was elected into the American College of Medical Informatics (ACMI) in 2015 and was inducted into the American Society for Clinical Investigation (ASCI) in 2016. He holds an MBBS from Baroda Medical College, India, a PhD from Penn State University, and completed postdoctoral training at Stanford University.
Dr. Char's K01 from NHGRI examines the ethical challenges of implementing whole genome sequencing in the care of critically ill children, particularly those with congenital cardiac disease. His long-term goal is to continue to identify and address ethical concerns associated with bringing next-generation technologies, such as whole genome sequencing and its attendant technologies like machine learning, to bedside clinical care.
Ron grew up in New York, went to college and medical school in Chicago at Northwestern, and completed a one-year clinical epidemiology research fellowship at Penn before coming to Stanford with his wife for Internal Medicine training. Ron's informatics interests are to work with other clinicians, informaticists, and designers to create, implement, evaluate, and disseminate tools that both help us become better physicians and build a learning healthcare system of the future.
Dr. Harman graduated from Case Western Reserve University School of Medicine. She then completed a residency in Internal Medicine at Stanford and a Palliative Care fellowship at the Palo Alto VA/Stanford program before joining the faculty at Stanford. She is the founding medical director of Palliative Care Services for Stanford Health Care and a 2017 Cambia Sojourns Scholar Leader Awardee. She is a Clinical Associate Professor in the Department of Medicine and a faculty member in the Stanford Center for Biomedical Ethics. She serves as the clinical section chief of Palliative Care in the Division of Primary Care and Population Health and co-chairs the Stanford Health Care Ethics Committee. Her research and educational interests include communication training in healthcare, bioethics in end-of-life care, and the application of machine learning to improve access to palliative care.
This conference is anchored in and builds on the release of the special National Academy of Medicine (NAM) publication "Artificial Intelligence in Healthcare: The Hope, The Hype, The Promise, The Peril," co-led by Michael Matheny and Sonoo Thadaney Israni.
Objectives: At the conclusion of this activity, participants should be able to:
• Evaluate AI in the healthcare landscape
• Critically assess the opportunities for AI in healthcare
• Develop appropriate criteria for evaluating/deploying AI solutions
• Build frameworks for creating and testing AI healthcare solutions
Ben Shneiderman is an Emeritus Distinguished University Professor in the Department of Computer Science, Founding Director (1983-2000) of the Human-Computer Interaction Laboratory (http://hcil.umd.edu), and a Member of the UM Institute for Advanced Computer Studies (UMIACS) at the University of Maryland. He is a Fellow of the AAAS, ACM, IEEE, and NAI, and a Member of the National Academy of Engineering, in recognition of his pioneering contributions to human-computer interaction and information visualization. His widely used contributions include clickable highlighted web links, high-precision touchscreen keyboards for mobile devices, and tagging for photos. Shneiderman's information visualization innovations include dynamic query sliders for Spotfire, the development of treemaps for viewing hierarchical data, novel network visualizations for NodeXL, and event sequence analysis for electronic health records.
Ben is the co-author with Catherine Plaisant of Designing the User Interface: Strategies for Effective Human-Computer Interaction (6th ed., 2016). He co-authored Readings in Information Visualization: Using Vision to Think (1999) and Analyzing Social Media Networks with NodeXL (2nd edition, 2019). His book Leonardo’s Laptop (MIT Press) won the IEEE book award for Distinguished Literary Contribution. The New ABCs of Research: Achieving Breakthrough Collaborations (Oxford, 2016) describes how research can produce higher impacts.
The next generation of user experiences will produce 1000-fold improvements in human capabilities. These new tools will amplify, augment, enhance, and empower people, just as the Web, email, search, navigation, digital photography, and many other applications have already done. Rather than emphasize autonomous machines and humanoid robots as team partners, these new tools will produce comprehensible, predictable, and controllable applications that promote self-efficacy, human responsibility, and social participation at scale. The goal is to ensure human control while increasing the level of automation.
Improved designs that produce trusted, reliable, and safe (TRS) systems will build on successful direct manipulation guidelines: a visual display of the objects of interest; rapid, incremental, and reversible operations; and informative feedback for every user action. Elevators, thermostats, airbags, text messaging systems, and the 737 MAX provide positive and negative lessons in charting the landscape of autonomy and control. Design guidelines and independent oversight mechanisms for prospective design reviews and retrospective analyses of failures will clarify the role of human responsibility, even as automation increases.
Conversations about ethics and AI are commonplace today, but they are often pitched at a high level of generality or abstraction. In this workshop, we gathered together leading young scholars, chiefly philosophers, to discuss a more detailed research agenda with a particular focus on moral and political philosophy and their intersections with AI. Topics included AI and explainability, AI and value alignment, governance of AI, and more.
HAI's October 28-29 conference on AI Ethics, Policy, and Governance at Stanford University will convene experts and leaders from academia, industry, civil society, and government to explore critical and emerging issues related to understanding and guiding AI's human and societal impact. Through plenary discussions, breakout sessions, and workshops we will explore the latest research, delve into case studies, illuminate best practices, and build a global community of research, policy, and practice committed to ensuring that AI benefits humanity.