The Future of Work and How the Workforce Adapts to Change
In the Shadow of the ‘Smart Court’ - Examining China’s Applications of Courtroom AI
Learning to See the Physical World
Scaffolding and Imitation Learning - Human Learning Principles Transferred to Robots
Beyond the buildings, lakes, homes, and businesses of Minneapolis are people, people who are affected by the technology their government uses. Civic Tech enhances the relationship between people, their community, and their government by giving residents a voice in public decision-making processes.
Natural language promises to be the ultimate interface for interacting with computers, allowing users to effortlessly tap into the wealth of digital information and extract insights from it.
Natural Language Processing (NLP) conventionally focuses on modeling words, phrases, or documents. However, natural language is generated by people, and with the growth of social media and automated assistants, NLP is increasingly tackling human problems that are social, psychological, or medical in nature.
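To make that shift concrete, here is a minimal sketch of the conventional document-modeling machinery pointed at a person-level attribute instead. The posts, the "stressed" labels, and the task itself are invented for illustration; nothing here comes from the talk.

```python
# Illustrative only: a conventional bag-of-words text model applied to a
# human-centered prediction task. The posts and "stressed" labels are
# invented for this sketch.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "deadline tomorrow and I haven't slept",
    "great hike with friends this weekend",
    "can't stop worrying about the exam",
    "relaxing evening, made dinner and read",
]
stressed = [1, 0, 1, 0]  # hypothetical person-level labels

# The same pipeline NLP uses to classify documents can, given
# person-level labels, target social or psychological attributes.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(posts, stressed)
print(model.predict(["worrying about the deadline"]))  # likely [1]
```

The modeling machinery is unchanged; what makes the task human-centered is that the label describes the person, not the text.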
Bio: John Markoff is HAI’s Journalist-in-Residence. He is also a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS), participating in projects focused on the future of work and artificial intelligence, and is currently researching a biography of Stewart Brand, the creator of the Whole Earth Catalog. Previously he was a Berggruen Fellow at CASBS and a staff historian at the Computer History Museum in Mountain View, Calif. Until 2017 he was a reporter at The New York Times, which he joined in March 1988 as the paper’s national computer writer. Before the Times, he worked for the San Francisco Examiner, wrote about technology for Pacific News Service, was a reporter at InfoWorld and West Coast editor for Byte Magazine, and wrote a column on personal computers for the San Jose Mercury. He has also been a lecturer at the University of California, Berkeley School of Journalism and an adjunct faculty member of the Stanford Graduate Program in Journalism. In 2013 he was awarded a Pulitzer Prize in explanatory reporting as part of a New York Times project on labor and automation; in 2007 he was named a fellow of the Society of Professional Journalists, the organization’s highest honor; and in June 2010 The New York Times presented him with the Nathaniel Nash Award, given annually for foreign and business reporting. He is the co-author of The High Cost of High Tech (Harper & Row), Cyberpunk: Outlaws and Hackers on the Computer Frontier (Simon & Schuster), and, with Tsutomu Shimomura, Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw (Hyperion). He also wrote What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry (Viking Books) and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots (HarperCollins Ecco). Markoff grew up in Palo Alto, California, graduated from Whitman College in Walla Walla, Washington, and attended graduate school at the University of Oregon, where he received a master’s degree in sociology.
Bio: Marietje Schaake is an International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the International Policy Director of the Cyber Policy Center, where she conducts policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she represents the center to governments, NGOs, and the technology industry, teaches courses on cyber policy from an international perspective, and brings leaders from around the world to Stanford to discuss cyber policy. Before joining Stanford, Schaake led an active career in politics and civic service as a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in the European Parliament, to which she was first elected in 2009. There she focused on trade, foreign policy, and technology; as a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, she developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.
Abstract: New developments in Artificial Intelligence, particularly deep learning and other forms of “second-wave” AI, are attracting enormous public attention. Both triumphalists and doomsayers are predicting that human-level AI may be “just around the corner.” To assess whether that prediction is true, we need a broad understanding of intelligence, in terms of which to assess: (i) what kinds of intelligence machines currently have, and will likely have in the future; and (ii) what kinds of intelligence people currently have, and may be capable of in the future. As the first step in this direction, I distinguish two kinds of intelligence: (i) “reckoning,” the kind of calculative rationality that computers excel at, including both first- and second-wave AI; and (ii) “judgment,” a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action, that is appropriate to the situation in which it is deployed. AI will develop world-changing reckoning systems, I argue, but nothing in AI as currently conceived approaches what is required to build a system capable of judgment.
Bio: Brian Cantwell Smith is Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, where he is also Professor of Information, Philosophy, Cognitive Science, and the History and Philosophy of Science and Technology, as well as being a Senior Fellow at Massey College. Smith’s research focuses on the philosophical foundations of computation, artificial intelligence, and mind, and on fundamental issues in metaphysics and epistemology. In the 1980s he developed the world’s first reflective programming language (3Lisp). He is the author of *On the Origin of Objects* (MIT Press, 1996), and of *On the Promise of Artificial Intelligence: Reckoning and Judgment* (MIT Press, 2019).
Abstract: The biggest challenge with the democratization of content is how to make sense of its scale. In the last decade, curation of content has consolidated into the hands of a few of the largest technology companies. Today that curation takes the form of machine learning, often dubbed "algorithms" by the media. Thomas helped build and introduce Instagram's most controversial algorithms: the non-chronological feed and personalized recommendations. He will discuss challenges from the perspective of an engineer in the control room as Instagram scaled to serve over a billion people, and will share a few of his thoughts about future directions as we start to form a dialogue about the responsibilities of platforms operating on a global scale.
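As a rough illustration of the shift Dimson describes, from a purely chronological feed to a learned relevance ranking, consider the toy ranker below. Every name in it (the Post fields, the weights, the scoring formula) is a hypothetical stand-in, not Instagram's actual system.

```python
# Toy sketch of a non-chronological, personalized feed ranker.
# Fields, weights, and formula are invented; not Instagram's system.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Post:
    author: str
    posted_at: datetime
    predicted_like_prob: float  # stand-in for an ML engagement model
    affinity: float             # viewer's historical interest in this author

def engagement_score(post: Post, now: datetime) -> float:
    """Blend predicted engagement, viewer-author affinity, and recency."""
    hours_old = (now - post.posted_at).total_seconds() / 3600
    recency = 1.0 / (1.0 + hours_old)  # newer posts decay less
    return 0.5 * post.predicted_like_prob + 0.3 * post.affinity + 0.2 * recency

def rank_feed(posts: list[Post], now: datetime) -> list[Post]:
    # A chronological feed sorts on posted_at alone; a personalized feed
    # sorts on a learned relevance score instead.
    return sorted(posts, key=lambda p: engagement_score(p, now), reverse=True)

now = datetime(2020, 1, 15, 12, 0)
feed = [
    Post("close_friend", now - timedelta(hours=8), 0.9, 0.8),
    Post("brand_account", now - timedelta(hours=1), 0.2, 0.1),
]
for p in rank_feed(feed, now):
    print(p.author, round(engagement_score(p, now), 3))
# close_friend ranks first (0.712) despite being older: personalization
# overrides chronology.
```

In a production system the hand-set weights would themselves be learned from engagement data, which is exactly where the curation-at-scale questions Dimson raises come in.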
Bio: Thomas Dimson is the original author of “The Algorithm,” the recommender systems behind Instagram's feed, stories, and discovery surfaces. He joined Instagram as one of its first 50 employees in 2013, working for seven years as a principal engineer and eventually an engineering director. In that time, he also invented products such as the stories polling sticker and Hyperlapse, and was named one of the top ten most creative people in business by Fast Company. Thomas graduated from the University of Waterloo with a Bachelor of Mathematics and received his master's in computer science from Stanford with a specialization in artificial intelligence.