The HAI Town Hall for Faculty will provide an update on the work of HAI with ample opportunity for Stanford faculty to ask questions. In the 18 months since its inception, HAI has worked to build a community of faculty and researchers across Stanford through grant programs and events. An important goal is to grow that community even further to include collaboration with as many departments and individuals as possible, continually fostering new ideas and research.
Multiple Dates
September 29, 9-11am PDT
Panel 1: How AI is powering China's Domestic Surveillance State
October 1, 9-11am PDT
Panel 2: The ethics and implications of doing business with China and Chinese companies
October 6, 9-11am PDT
Panel 3: China as an Emerging Global AI Superpower
October 9, 9-11am PDT
Panel 4: How Democracies Should Respond to China's Emergence as an AI Superpower
This conference will zero in on the latest research in cognitive science, neuroscience, vision, language, and thought, and how it informs the pursuit of artificial intelligence.
Questions to be addressed include:
How can we hope to build an artificial intelligence when we still understand so little about human intelligence?
How can we build a synergistic partnership between cognitive psychology, neuroscience, and artificial intelligence?
Follow the conversation on Twitter with #NeuroHAI, or follow @StanfordHAI for updates.
Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and Stanford Arts on September 30 at 4pm PDT for a panel discussion of Coded Bias, a documentary film focusing on the ground-breaking research by MIT’s Joy Buolamwini into facial recognition technology’s troubling racial bias.
The panel discussion will include Coded Bias director Shalini Kantayya and HAI co-director and computer vision expert Fei-Fei Li, moderated by HAI associate director and Stanford professor of English Michele Elam.
A virtual event hosted by the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the AI Initiative at The Future Society to officially announce a global alliance on the COVID-19 pandemic response. The alliance will provide a vital, not-yet-available information service for facing and mitigating the crisis.
Held in parallel with the UN High Level Political Forum, the launch will include details on the partnership and feature speakers from the private sector, academia, government, and multilateral institutions, including UNESCO, the World Bank, the WHO, and UN Global Pulse, offering unique perspectives on the roadblocks and opportunities in addressing the COVID-19 pandemic. Topics will include the challenges of working with disparate data and extracting meaningful information in the fight against the virus, as well as the importance of building multi-stakeholder collaborations.
The Hoover Institution and the Stanford Institute for Human-Centered Artificial Intelligence (HAI) present Ensuring America's Innovation in Artificial Intelligence with Dr. Condoleezza Rice and Dr. Fei-Fei Li.
Artificial Intelligence (AI) has the potential to radically transform every industry and every society. Such profound changes offer great opportunities to improve the human condition, but they also pose unprecedented challenges. As this new era arrives, the creators and designers of AI must account for diversity of thought and ensure systems are built to properly reflect what it means to be human. Guiding the future of AI responsibly, in a way that reflects the American values of equality, opportunity, and individual freedom, will be paramount to realizing our shared dream of a better future for all of humanity.
What is the difference between magic and imaginary science in fiction? How can they shed light on the way we understand the real world? Join us as we hear from Ted Chiang, American speculative fiction writer and author of Exhalation and “Story of Your Life,” the short story that was the basis of the 2016 film Arrival. Chiang's work has received four Nebula awards, four Hugo awards, the John W. Campbell Award for Best New Writer, and four Locus awards.
Presented by the Symbolic Systems Program. Co-sponsored by the English Department, Symbolic Systems Society, and Creative Writing Program.
The workshop convened leading academics, computer vision experts, and representatives from civil society, government, and industry to discuss critical questions and develop a whitepaper that makes recommendations related to assessing the performance of facial recognition technology.
Core questions included: What needs to be improved about how we benchmark facial recognition technology tools? How do we close the gap between testing algorithms in the lab and testing products in real world conditions? How do we improve and ground our understanding of this rapidly changing space? What are current best practices? How do we develop them further? What is needed in order to develop consensus standards for this important technology?
This event was by invitation only.
HAI Faculty Associate Director Susan Athey and incoming HAI senior fellow Erik Brynjolfsson invited researchers working on AI and labor markets across the Stanford community to a virtual event on May 18 to present and discuss ongoing research and build ties for future collaborations.
How is AI impacting the labor market? How is it changing labor demand and supply, occupations, hiring, labor mobility, firm organization, and behavior? Groups from disciplines across campus, including Economics, Business, Management Science and Engineering, Political Science, Sociology, and Computer Science, are working on these important questions from different angles, with different lenses and methodologies. The workshop aimed to bring the working group together to enable cross-disciplinary discussions and inspire future research collaborations.
Abstract: Most studies of automation focus on manufacturing or use aggregate data. In one of the first studies of the service sector using establishment-level data, we examine the impact of robot adoption on staffing in nursing homes. This setting is important because robots are increasingly being adopted in many countries to address the challenges posed by population aging. Japan, in particular, has been actively developing and deploying robots in nursing homes to deal with labor shortages, and since 2015 has subsidized nursing home purchases of robots. Analyzing 2017 data from Japanese nursing homes, we document that facilities that adopt robots are larger, with more functionally impaired residents, greater numbers of care workers and nurses, more of many other assistive technologies, better management practices, and locations in prefectures with higher planned subsidies for robots per nursing home. However, using variation in those robot subsidies as an instrumental variable, we find that robot adoption has little causal impact on overall staffing or wages, but leads to additional non-regular nurse hours and increased turnover of regular care workers. Robot adoption also reduces the wage share, consistent with a compositional shift in staffing toward non-regular workers. The tight labor market in Japan appears to have prompted nursing homes to adopt a more capital-intensive production process without many detrimental effects on labor. Our results contrast with recent findings of negative effects of robots on employment and wages in the US manufacturing sector, suggesting that the impact of robots likely differs by labor market conditions and industry.
Abstract: In this talk I hope to illustrate how AI ethics can avoid the undesirable extremes of two dimensions:
First dimension: Complacency vs Inflation
On the one hand, I will argue that we should eschew the complacent view that AI presents no novel challenges for ethics — the view that AI is just a technology like any other, so extant ethical principles for non-AI technologies are all we need. On the other hand, I will argue that we should also avoid the inflationary view that the need for new ethical principles for AI derives from the fact that AI systems are themselves moral agents and/or patients.
Second dimension: Reactive systems vs Robots with Obligations
On the one hand, I will argue that the ethical construction of autonomous AI systems (including, but not limited to, autonomous robots such as driverless cars) will require that such systems do more than *merely* transform an input signal to an output signal (as is prevalent in much machine learning technology); at least part of that transformation, to have the right counterfactual richness that ethics requires, will have to have deliberative structure. On the other hand, AI systems that reason about their ethical obligations and what is morally permissible for them are not a solution since AI systems will not, for the foreseeable future, be the kinds of things that could have ethical obligations or moral permissions.
For each of these dimensions, I give a specific example (a policy, and a design, respectively) that avoids the horns of the dilemma, and the moral hazards they entail.
Abstract: Over the past few years, a series of crises and scandals have resulted in pushback against some of the biggest tech companies from employees within their ranks and from a broader public. This pushback has highlighted a deep concern with the ethical implications of data-driven technologies that have been implicated in anti-democratic practices, automated warfare, the spread of mis- and disinformation, and the perpetuation of racial and gender disparities. In the wake of this pushback, Silicon Valley tech companies have recently begun to invest resources in hiring personnel to design, develop, and manage a portfolio of practices that can adequately address these areas of ethical concern. In this talk we will discuss our recent research, in which we interviewed dozens of these "ethics owners" to understand how they understand their emerging job roles, the conceptual and organizational challenges that stand in their way, and the tensions that emerge from developing ethics practices inside and across their companies.
Biography: H. Andrew Schwartz is director of the Human Language Analysis Beings (HLAB) and Assistant Professor of Computer Science at Stony Brook University (SUNY). His interdisciplinary research focuses on developing large-scale language analyses for health and social sciences as well as integrating human factors into predictive models to improve natural language processing for diverse populations. He is an active member of many subcommunities of AI (e.g. NAACL, EMNLP, WWW), Psychology (e.g. Psychological Science, JPSP, PNAS), and health informatics (e.g. JMIR, Nature - Sci Reports, Am Jour. of Public Health). He was previously Lead Research Scientist for the World Well-Being Project at the University of Pennsylvania, an interdisciplinary team studying how big language analyses can reveal and predict differences in health, personality, and well-being. He received his PhD in Computer Science from the University of Central Florida in 2011 with research on acquiring lexical semantic knowledge from the Web.
Abstract: Natural Language Processing (NLP) conventionally focuses on modeling words, phrases, or documents. However, natural language is generated by people, and with the growth of social media and automated assistants, NLP is increasingly tackling human problems that are social, psychological, or medical in nature. Language shared on Facebook and Twitter has already been used to measure characteristics ranging from individual depression and personality to community well-being, mortality, and, recently, COVID symptom rates. In this talk I will summarize recent work from the Human Language Analysis Lab that pushes NLP toward modeling people as the originators of language. This includes controlling for and correcting biases from extralinguistic variables (demographics, socioeconomics), placing language in time (forecasting future outcomes), and leveraging the inherent multi-level structure of the data (people, who belong to communities, generate language). Taken together, I will suggest that considering the people behind the language not only offers opportunities for improved accuracy but could be fundamental to NLP's role in our increasingly digital world.