Bio: John Markoff is HAI’s Journalist-in-Residence. He is also a research affiliate at the Center for Advanced Study in the Behavioral Sciences (CASBS), participating in projects focusing on the future of work and artificial intelligence. He is currently researching a biography of Stewart Brand, the creator of the Whole Earth Catalog. Previously he was a Berggruen Fellow at CASBS. He has also been a staff historian at the Computer History Museum in Mountain View, Calif. Until 2017, he was a reporter at The New York Times, beginning in March 1988 as the paper’s national computer writer. Prior to joining the Times, he worked for the San Francisco Examiner. He has written about technology for Pacific News Service, was a reporter at InfoWorld and West Coast editor for Byte Magazine, and wrote a column on personal computers for the San Jose Mercury. He has also been a lecturer at the University of California at Berkeley School of Journalism and an adjunct faculty member of the Stanford Graduate Program on Journalism. In 2013 he was awarded a Pulitzer Prize in explanatory reporting as part of a New York Times project on labor and automation. In 2007, he was named a fellow of the Society of Professional Journalists, the organization’s highest honor. In June 2010, The New York Times presented him with the Nathaniel Nash Award, which is given annually for foreign and business reporting. He is the co-author of The High Cost of High Tech, published by Harper & Row, and co-wrote Cyberpunk: Outlaws and Hackers on the Computer Frontier, published by Simon & Schuster. Hyperion published Takedown: The Pursuit and Capture of America's Most Wanted Computer Outlaw, which he co-authored with Tsutomu Shimomura. What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry was published by Viking Books, and Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots was published by HarperCollins Ecco. Markoff grew up in Palo Alto, California, and graduated from Whitman College in Walla Walla, Washington. He attended graduate school at the University of Oregon and received a master's degree in sociology.
Bio: Marietje Schaake is an International Policy Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) and the International Policy Director of the Cyber Policy Center, where she conducts policy-relevant research focused on cyber policy recommendations for industry and government. In addition to her own research, she represents the center to governments, NGOs, and the technology industry. Schaake also teaches courses on cyber policy from an international perspective and brings leaders from around the world to Stanford to discuss cyber policy. Prior to joining Stanford, Schaake led an active career in politics and civic service. She was a representative of the Dutch Democratic Party and the Alliance of Liberals and Democrats for Europe (ALDE) in the European Parliament, where she was first elected in 2009. There she focused on trade, foreign policy, and technology. As a member of the Global Commission on the Stability of Cyberspace and founder of the European Parliament Intergroup on the European Digital Agenda, Schaake developed solutions to strengthen the rule of law online, including initiating the net neutrality law now in effect throughout Europe.
Abstract: New developments in Artificial Intelligence, particularly deep learning and other forms of “second-wave” AI, are attracting enormous public attention. Both triumphalists and doomsayers are predicting that human-level AI may be “just around the corner.” To assess whether that prediction is true, we need a broad understanding of intelligence, in terms of which to assess: (i) what kinds of intelligence machines currently have, and will likely have in the future; and (ii) what kinds of intelligence people currently have, and may be capable of in the future. As the first step in this direction, I distinguish two kinds of intelligence: (i) “reckoning,” the kind of calculative rationality that computers excel at, including both first- and second-wave AI; and (ii) “judgment,” a form of dispassionate, deliberative thought, grounded in ethical commitment and responsible action, that is appropriate to the situation in which it is deployed. AI will develop world-changing reckoning systems, I argue, but nothing in AI as currently conceived approaches what is required to build a system capable of judgment.
Bio: Brian Cantwell Smith is Reid Hoffman Professor of Artificial Intelligence and the Human at the University of Toronto, where he is also Professor of Information, Philosophy, Cognitive Science, and the History and Philosophy of Science and Technology, as well as being a Senior Fellow at Massey College. Smith’s research focuses on the philosophical foundations of computation, artificial intelligence, and mind, and on fundamental issues in metaphysics and epistemology. In the 1980s he developed the world’s first reflective programming language (3Lisp). He is the author of *On the Origin of Objects* (MIT Press, 1996), and of *On the Promise of Artificial Intelligence: Reckoning and Judgment* (MIT Press, 2019).
Abstract: The biggest challenge with the democratization of content is how to make sense of its scale. In the last decade, curation of content has consolidated into the hands of a few of the largest technology companies. Today, that curation takes the form of machine learning — often dubbed "algorithms" by the media. Thomas helped build and introduce Instagram's most controversial algorithms: the non-chronological feed and personalized recommendations. He will discuss challenges from the perspective of an engineer in the control room as Instagram scaled to serve over a billion people. Thomas will share a few of his thoughts about future directions as we start to form a dialogue about the responsibilities of platforms operating on a global scale.
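For readers unfamiliar with what a non-chronological, personalized feed actually does, here is a minimal sketch in Python: posts are ordered by a predicted-engagement score rather than by recency. The scoring formula, weights, and data are all invented for illustration and are not Instagram's actual ranking system.

```python
# Minimal sketch of non-chronological, personalized feed ranking: posts are
# ordered by a predicted-engagement score rather than by recency. The
# formula, weights, and data are invented, not Instagram's model.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    age_hours: float
    predicted_like_prob: float  # in practice, the output of a trained model

def score(post: Post, affinity: dict[str, float]) -> float:
    # Blend how much the viewer cares about the author with how likely they
    # are to engage, decaying older posts slightly.
    a = affinity.get(post.author, 0.1)
    return a * post.predicted_like_prob / (1 + 0.05 * post.age_hours)

affinity = {"close_friend": 0.9, "brand": 0.2}
posts = [Post("brand", 1.0, 0.8), Post("close_friend", 20.0, 0.5)]

# The close friend's older post outranks the brand's fresh one.
for p in sorted(posts, key=lambda p: score(p, affinity), reverse=True):
    print(p.author, round(score(p, affinity), 3))
```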
Bio: Thomas Dimson is the original author of “The Algorithm” — the recommender systems behind Instagram's feed, stories, and discovery surfaces. He joined Instagram as one of its first 50 employees in 2013, working for seven years as a principal engineer and eventually an engineering director. In that time, he also invented products such as the stories polling sticker and Hyperlapse, and was named one of the top ten most creative people in business by Fast Company. Thomas graduated from the University of Waterloo with a bachelor of mathematics and received his master's in computer science from Stanford with a specialization in artificial intelligence.
This talk develops the proposal that a central – and neglected – ethical challenge for the field of AI is demystification of the techniques and technologies that constitute it. Demystification goes beyond questions of fairness, accuracy and transparency (although those are certainly relevant), to the problem of how we might set out clearly the prerequisites for the efficacy of AI’s operations. To make more concrete what she means by demystification, Lucy will examine the case of so-called ‘pattern of life’ analysis in the designation of persons and activities identified as posing a threat to the security of the US homeland. ‘Human-centered AI’ takes on a darker meaning in this context, as the human becomes centered in the cross hairs of a system of targeting, whether for assassination or incarceration. Lucy will close with some suggestions for how we might proceed with the project of demystification, beginning with an articulation of the limiting conditions as well as the unprecedented powers of contemporary algorithmic systems.
Abstract: Recent advances in artificial intelligence and deep learning have undoubtedly been driven by the large amounts of data amassed over the years, helping firms, researchers, and practitioners achieve many amazing feats, most notably in recognition tasks, where they often surpass human ability on several benchmarks. The yield, however, does not seem equally distributed among all who aspire to repeat that success in their own domains, and the reason is the data themselves. A select few are running away with the infrastructure and the competence they have built over time to collect and process data, leaving many others behind. For some, the struggle is how to get data in the first place; for others, it is figuring out what to do with them. And while many give their data away without knowing what they get in return, growing awareness of the issue among the public and thought leaders is materializing into new regulations and proposals for how data should be governed and shared. In this seminar, Bongjun Ko, an AI Engineering Fellow at Stanford HAI, will share his thoughts on this issue, drawing on his experience as an engineer who has been trying to overcome the lack of data when building data-driven solutions, and as an individual who has been supplying the “new oil of the 21st century.” Some of the open questions he will pose include: What can you do to remain competitive without data? Is data really the new oil? How much is a piece of data worth, and can it be measured?
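On the question of what a piece of data is worth, one simple, measurable notion is leave-one-out value: how much a model's validation performance drops when a single training point is removed. The sketch below illustrates the idea with scikit-learn on a public dataset; the choice of model, dataset, and metric is an assumption made for illustration, not anything proposed in the talk.

```python
# A minimal sketch of leave-one-out data valuation: the "worth" of a training
# point is the drop in validation accuracy when it is removed. Model, data,
# and metric are illustrative assumptions, not the talk's proposal.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def val_accuracy(X_train, y_train):
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    return model.score(X_val, y_val)

baseline = val_accuracy(X_tr, y_tr)
values = []
for i in range(50):  # value the first 50 points; the full loop works but is slow
    mask = np.arange(len(X_tr)) != i
    values.append(baseline - val_accuracy(X_tr[mask], y_tr[mask]))

best = int(np.argmax(values))
print(f"point {best} is worth {values[best]:+.4f} validation accuracy on this split")
```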
Abstract: The content shared on social media is among the largest data sets on human behavior in history. I leverage this data to address questions in the psychological sciences. Specifically, I apply natural language processing and machine learning to characterize and measure psychological phenomena with a focus on mental and physical health. For depression, I will show that machine learning models applied to Facebook status histories can predict future depression as documented in the medical records of a sample of patients. For heart disease, the leading cause of death, I demonstrate how prediction models derived from geo-tagged Tweets can estimate county mortality rates better than gold-standard epidemiological models, and at the same time give us insight into the sociocultural context of heart disease. I will also present preliminary findings on my emerging project to measure the subjective well-being of large populations. Across these studies, I argue that AI-based approaches to social media can augment clinical practice, guide prevention, and inform public policy.
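As a concrete, if drastically simplified, illustration of the kind of pipeline such studies use, the sketch below classifies short posts with TF-IDF features and a linear classifier. The posts and labels are invented stand-ins; the actual work links real posts to medical records.

```python
# Minimal sketch of text-based prediction in the spirit of the talk:
# TF-IDF features plus logistic regression over short posts. All posts
# and labels below are invented stand-ins, not real patient data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "feeling great after a long run this morning",
    "no sleep again, everything feels heavy",
    "lunch with friends was wonderful",
    "no energy to get out of bed today",
]
labels = [0, 1, 0, 1]  # 1 = later documented depression (hypothetical)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(posts, labels)

# Expected [1]: the phrase shares "no", "sleep", "energy" with label-1 posts.
print(clf.predict(["no sleep and no energy lately"]))
```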
Due to some unforeseen circumstances we regretfully have to cancel this event. We hope to schedule this talk at a later date.
We must separate the technical endeavor of imbuing artificial systems with moral agency from philosophical questions about the ethics of doing so. Both are important ventures, but they are distinct, and the nature of the latter depends on the nature of the former. Failure to separate the two might lead to much wasted time reasoning about the ethics of technologies that will never, or could never, be created. However, in the context of a specific system engaged in artificial ethics, meaningful philosophical work can be done. To provide a guiding example, this talk sets out an abstract framework, based on dynamic systems theory, for creating an artificial system with moral agency. The framework itself is speculative, but it is plausible and, more importantly, concrete enough to ground philosophical work. Using that design, philosophical questions directly relevant to that system are posed, and mechanisms by which different people with different value systems might go about answering them are outlined. Finally, this talk is not purely abstract: it directly informs the speaker's approach to writing ISO AI standards on ethically building artificially intelligent systems. It concludes with an introduction to how those standards are taking shape and how people interested in the topic can contribute.
Most deep learning networks today rely on dense representations. This is in stark contrast to our brains, which are extremely sparse. In this talk, Subutai will first discuss what is known about the sparsity of activations and connectivity in the neocortex. He will also summarize new experimental data on active dendrites, branch-specific plasticity, and structural plasticity, each of which has surprising implications for how we think about sparsity. In the second half of the talk, Subutai will discuss how these insights from the brain can be applied to practical machine learning applications. He will show how sparse representations can give rise to improved robustness, continuous learning, powerful unsupervised learning rules, and improved computational efficiency.
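For a flavor of how sparse activations can be enforced in an artificial network, here is a minimal k-winners-take-all sketch in NumPy; it illustrates the general idea of sparsity as an activation constraint, not Numenta's specific implementation.

```python
# Minimal sketch of a k-winners-take-all (k-WTA) activation: only the top-k
# units in a layer stay active, forcing a sparse representation. This is an
# illustration of the idea, not Numenta's exact method.
import numpy as np

def k_winners(x, sparsity=0.05):
    """Keep only the top-k entries of activation vector x; zero the rest."""
    k = max(1, int(round(sparsity * x.size)))
    threshold = np.partition(x, -k)[-k]  # k-th largest activation
    return np.where(x >= threshold, x, 0.0)

rng = np.random.default_rng(0)
activations = rng.normal(size=100)
sparse = k_winners(activations, sparsity=0.05)
print(np.count_nonzero(sparse))  # ~5 of 100 units remain active
```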
Disruptive new technologies are often heralded for their power to transform industries, increase efficiency, and improve lives. However, emerging technologies such as artificial intelligence and quantum computing don’t just disrupt industries: they disrupt the workforce. Technological innovation creates new jobs and transforms existing roles, resulting in hybrid jobs that fuse skills from disparate fields in unfamiliar ways. Research from Burning Glass finds that over 250 occupations are now highly hybridized, accounting for 1 in 8 job openings. Moreover, disciplines born in the digital age such as data analytics, programming, and cybersecurity are spreading across the economy, forcing firms, training providers, and individuals to keep pace with a dizzying array of new skill sets to manage rapid digital transformation. This seminar will explore Burning Glass’s research on emerging technologies and their impact on the job market, discuss the new foundational skill sets needed in a digital economy, and consider the implications for firms, training providers, policymakers, and individuals as disruptive new technologies introduce new skill needs and rewrite the DNA of the workforce.
The Impact of Artificial Intelligence on the Labor Market
Michael developed a new method to predict the impacts of technology on occupations. He used the overlap between the text of job task descriptions and the text of patents to construct a measure of the exposure of tasks to automation. He first applied the method to historical cases such as software and industrial robots.
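A toy sketch of that text-overlap idea appears below. The published method matches structured verb-object pairs between task descriptions and patents; plain word overlap stands in here, and every text snippet is an invented example, not data from the study.

```python
# Toy sketch of measuring a task's exposure to automation as the textual
# overlap between its description and a body of patent text. Webb's actual
# measure matches verb-object pairs; simple word overlap stands in here,
# and all snippets below are invented examples.
def exposure(task: str, patents: list[str]) -> float:
    task_words = set(task.lower().split())
    patent_words = {w for p in patents for w in p.lower().split()}
    return len(task_words & patent_words) / len(task_words)

patents = [
    "system for analyzing medical images to detect tumors",
    "method for scheduling delivery routes automatically",
]
tasks = [
    "analyze medical images for signs of disease",  # high overlap -> exposed
    "comfort patients before surgery",              # no overlap -> not exposed
]
for t in tasks:
    print(f"{exposure(t, patents):.2f}  {t}")
```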
Van Ton-Quinlivan is a nationally recognized thought leader in workforce development, quoted in The New York Times, Chronicle of Higher Education, Stanford Social Innovation Review, U.S. News & World Report, and other publications. Her career spans the public, private, and non-profit sectors. Most recently, she served as executive vice chancellor of the nation’s largest system of higher education, the California Community Colleges, and grew public investment in workforce programs from $100M to over $1B during her tenure. Her talk will outline current higher education reforms, present provocations on how the future of work may unfold, and highlight where our social structures must evolve to meet the workforce development challenges ahead. Follow her @WorkforceVan.
Occupant-Favoring Autonomous Vehicles
Good news! The near future has arrived and you’re ready to purchase your first fully autonomous vehicle. You have narrowed down your search to a few manufacturers and have just one decision left to make: How would you like your vehicle to respond if it finds itself in a potential collision with other autonomous vehicles? If, like most people, you care more about your own safety and that of your friends and family than you care about the safety of strangers on the road, you will understandably be drawn to a vehicle that is programmed to be, at least to some degree, occupant-favoring. Such a vehicle would tend to select courses of action that reduce harm to its own passengers in a crash, even when doing so means that a greater harm will befall the occupants of other vehicles. Because most consumers are like you, occupant-favoring vehicles will soon dominate the market if they are not regulated. In this talk, which draws on a joint project with Tomi Francis (Oxford), Todd Karhu will discuss reasons for and against a regulatory ban on occupant-favoring vehicles, including the possibility that if no one is permitted to ride in an occupant-favoring vehicle, every passenger will be safer than if everyone does.
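As a toy worked example of that last possibility, the numbers below (entirely invented, not crash statistics) show how occupant-favoring behavior can have the structure of a prisoner's dilemma: each buyer individually prefers a favoring vehicle, yet everyone is worse off when all choose one.

```python
# Toy expected-harm arithmetic for the collective-action point above.
# All numbers are invented for illustration only. In a two-car collision,
# an impartial vehicle splits harm evenly, while an occupant-favoring
# vehicle shifts harm onto the other car when it can.
IMPARTIAL_SPLIT = (5, 5)  # (harm to you, harm to other): both impartial
FAVORING_WINS = (2, 9)    # you favor occupants, the other car is impartial
BOTH_FAVORING = (7, 7)    # both act selfishly; total harm rises

print("everyone impartial: your expected harm =", IMPARTIAL_SPLIT[0])
print("only you favoring:  your expected harm =", FAVORING_WINS[0])
print("everyone favoring:  your expected harm =", BOTH_FAVORING[0])
# 2 < 5 < 7: each driver prefers a favoring car (2 beats 5), but when all
# choose it, everyone ends up worse off (7): a prisoner's dilemma, and one
# argument for a regulatory ban.
```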
According to the World Health Organization, more than one billion people worldwide have disabilities. The field of disability studies defines disability through a social lens, which considers people disabled to the extent that society creates accessibility barriers. AI technologies offer the possibility of removing many accessibility barriers. For example, computer vision might give people who are blind a better sense of the visual world, speech recognition and translation technologies might offer real-time captioning for people who are hard of hearing, and new robotic systems might augment the capabilities of people with mobility restrictions. Considering the needs of users with disabilities can help technologists identify high-impact challenges whose solutions can advance the state of AI for all users. At the same time, ethical challenges such as inclusion, bias, privacy, error, expectation setting, simulated data, and social acceptability must be considered. In this lecture, I will define these seven challenges, provide examples of how they relate to AI for Accessibility technologies, and discuss future considerations in this space.