Most studies of automation focus on manufacturing or use aggregate data. In one of the first studies of the service sector using establishment-level data, we examine the impact of robot adoption on staffing in nursing homes.
Natural Language Processing (NLP) conventionally focuses on modeling words, phrases, or documents. However, natural language is generated by people, and with the growth of social media and automated assistants, NLP is increasingly tackling human problems that are social, psychological, or medical in nature.
Bio: John Markoff is HAI’s Journalist-in-Residence. He is also a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS), participating in projects focusing on the future of work and artificial intelligence. He is currently researching a biography of Stewart Brand, the creator of the Whole Earth Catalog. Previously he was a Berggruen Fellow at CASBS. He has also been a staff historian at the Computer History Museum in Mountain View, Calif. Until 2017, he was a reporter at The New York Times, beginning in March 1988 as the paper’s national computer writer. Prior to joining the Times, he worked for the San Francisco Examiner. He has written about technology for Pacific News Service. He was a reporter at Infoworld and West Coast editor for Byte Magazine, and wrote a column on personal computers for the San Jose Mercury. He has also been a lecturer at the University of California at Berkeley School of Journalism and an adjunct faculty member of the Stanford Graduate Program on Journalism. In 2013 he was awarded a Pulitzer Prize in explanatory reporting as part of a New York Times project on labor and automation. In 2007, he was named a fellow of the Society of Professional Journalists, the organization’s highest honor. In June 2010, The New York Times presented him with the Nathaniel Nash Award, which is given annually for foreign and business reporting. He is the co-author of The High Cost of High Tech, published by Harper & Row. He co-wrote Cyberpunk: Outlaws and Hackers on the Computer Frontier, published by Simon & Schuster. Hyperion published Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw, which he co-authored with Tsutomu Shimomura. What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry was published by Viking Books. Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots was published by HarperCollins Ecco.
Markoff grew up in Palo Alto, California, and graduated from Whitman College in Walla Walla, Washington. He attended graduate school at the University of Oregon and received a master's degree in sociology.
Bio: Marietje Schaake is an International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the International Policy Director of the Cyber Policy Center, where she conducts policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she represents the center to governments, NGOs, and the technology industry. Schaake also teaches courses on cyber policy from an international perspective and brings to Stanford leaders from around the world to discuss cyber policy. Prior to joining Stanford, Schaake led an active career in politics and civic service. She was a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in the European Parliament, where she was first elected in 2009. In the European Parliament, Schaake focused on trade, foreign policy, and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.
Abstract: New developments in Artificial Intelligence, particularly deep learning and other forms of “second-wave” AI, are attracting enormous public attention. Both triumphalists and doomsayers are predicting that human-level AI may be “just around the corner.” To assess whether that prediction is true, we need a broad understanding of intelligence, in terms of which to assess: (i) what kinds of intelligence machines currently have, and will likely have in the future; and (ii) what kinds of intelligence people currently have, and may be capable of in the future. As the first step in this direction, I distinguish two kinds of intelligence: (i) “reckoning,” the kind of calculative rationality that computers excel at, including both first- and second-wave AI; and (ii) “judgment,” a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action, that is appropriate to the situation in which it is deployed. AI will develop world-changing reckoning systems, I argue, but nothing in AI as currently conceived approaches what is required to build a system capable of judgment.
Bio: Brian Cantwell Smith is Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, where he is also Professor of Information, Philosophy, Cognitive Science, and the History and Philosophy of Science and Technology, as well as being a Senior Fellow at Massey College. Smith’s research focuses on the philosophical foundations of computation, artificial intelligence, and mind, and on fundamental issues in metaphysics and epistemology. In the 1980s he developed the world’s first reflective programming language (3Lisp). He is the author of *On the Origin of Objects* (MIT Press, 1996), and of *On the Promise of Artificial Intelligence: Reckoning and Judgment* (MIT Press, 2019).
Rahul Panicker, Chief Innovation Officer, Wadhwani AI
Abstract: From our experience developing and deploying AI-for-social-good solutions (helping healthcare workers in villages of developing countries weigh newborns using just a smartphone, helping cotton farmers fight pest attacks, and helping tuberculosis-control programs find and support TB patients) and advising organizations such as the WHO, the UN ITU, and governments on AI, I will share lessons learned and opportunities for AI to have large-scale impact across domains like health, agriculture, education, and financial inclusion. Such impact will require novel approaches across algorithms, human factors, regulatory frameworks, and systems thinking. AI-for-social-good also offers a rich source of problems for AI, spanning computer vision, weakly supervised learning, causal reasoning, domain adaptation, uncertainty calibration, explainability, computing on low-resource devices, and privacy-preserving learning. The Wadhwani Institute for Artificial Intelligence is an independent nonprofit research institute that develops and deploys AI-for-social-good solutions in the developing world. Bio: Dr. Rahul Panicker, as Chief Innovation Officer, heads research at the Wadhwani Institute for Artificial Intelligence. Prior to this, he was a co-founder of Embrace, a for-profit social enterprise that has helped over 500,000 babies worldwide through low-cost incubators that work without electricity. He is an MIT TR35 awardee, a World Economic Forum Social Entrepreneur of the Year, an Industrial Designers Society of America Gold winner, and an Echoing Green Fellow. He holds an MS/PhD in EE from Stanford University, is an alumnus of the Stanford d.school, and has a B.Tech from IIT Madras.
Abstract: The biggest challenge with the democratization of content is how to make sense of the scale. In the last decade, curation of content has consolidated into the hands of a few of the largest technology companies. Today, that curation takes the form of machine learning — often dubbed algorithms by the media. Thomas helped build and introduce the most controversial algorithms of Instagram: non-chronological feed and personalized recommendations. He will discuss challenges from the perspective of an engineer in the control room as Instagram scaled to serve over a billion people. Thomas will share a few of his thoughts about future directions as we start to form a dialogue about the responsibilities of platforms operating on a global scale.
Bio: Thomas Dimson is the original author of “The Algorithm” — the recommender systems behind Instagram's feed, stories, and discovery surfaces. He joined Instagram as one of its first 50 employees in 2013, working for seven years as a principal engineer and eventually an engineering director. In that time, he also invented products such as the stories polling sticker and Hyperlapse, and was named one of the top ten most creative people in business by Fast Company. Thomas graduated from the University of Waterloo with a Bachelor of Mathematics and received his master's in computer science from Stanford with a specialization in artificial intelligence.
Workshop Leader: Emilie Silva
A one-day interdisciplinary workshop involving Stanford faculty and researchers, a select number of outside academics from other institutions, and a small number of private sector and governmental analysts to focus on the intersection of AI and various aspects of international security. The goal was to identify concrete research agendas and synergies, identify gaps in our understanding, and build a network of scholars and experts to address these challenges.
The past several years have seen startling advances in artificial intelligence and machine learning, driven in part by advances in deep neural networks. AI-enabled machines can now meet or exceed human abilities in a wide range of tasks, including chess, Jeopardy, Go, poker, object recognition, and driving in some settings. AI systems are being applied to solve a range of problems in transportation, finance, stock trading, health care, intelligence analysis, and cybersecurity. Despite calls from prominent scientists to avoid militarizing AI, nation-states are certain to use AI and machine learning tools for national security purposes.
A technology that has the potential for such sweeping changes across human society should be evaluated for its potential effects on international stability. Many national security applications of AI could be beneficial, such as advanced cyber defenses that can identify new malware, automated computer security tools to find and patch vulnerabilities, or machine learning systems to uncover suspicious behavior by terrorists. Current AI systems have substantial limitations and vulnerabilities, however, and a headlong rush into national security applications of artificial intelligence could pose risks to international stability. Some security-related applications of AI could be destabilizing, and competitive dynamics between nations could lead to harmful consequences such as a “race to the bottom” on AI safety. Other security-related applications of AI could improve international stability.
CNAS is undertaking a two-year, in-depth, interdisciplinary project to examine how artificial intelligence will influence international security and stability. It is critical for global stability to begin a discussion about ways to mitigate the risks of autonomous systems and artificial intelligence while taking advantage of their benefits. This project will build a community from three sectors that often do not intersect: AI researchers in academia and business; academic experts in international security; and policy practitioners in government, both civilian and military. Through a series of workshops, commissioned papers, and reports, this project will foster a community of practice and begin laying the foundations for a field of study on AI and international security. The project will conclude with recommendations to policymakers for ways to capitalize on the potential stabilizing benefits of artificial intelligence while avoiding uses that could undermine stability.
This talk develops the proposal that a central – and neglected – ethical challenge for the field of AI is demystification of the techniques and technologies that constitute it.
Join California Supreme Court Justice Cuéllar, who teaches the popular “Regulating AI” course at Stanford, Dan Ho, Associate Director of HAI and professor at the law school and political science, and Terah Lyons, the Founding Executive Director of the Partnership on AI, for a conversation on the law, regulation, and governance of AI! The three will provide a range of perspectives on the promise, challenges, and directions for AI governance.
The AI for Healthcare session will feature Marzyeh Ghassemi, whose “Healthy ML” research focuses on creating and applying machine learning to understand and improve health. Improving health requires targeting and evidence, and Marzyeh tackles part of this puzzle with machine learning. This session will cover some of the novel technical opportunities for machine learning in health challenges and the important progress to be made through careful application to the domain. She will also walk through the dangers of applying methods without a robust understanding of the domain and of potential downstream uses.
Recent advances in artificial intelligence and deep learning have undoubtedly been driven by the large amounts of data amassed over the years, helping firms, researchers, and practitioners achieve many amazing feats, most notably in recognition tasks, where models now often surpass human ability on several benchmarks.