2023 Artificial Intelligence Index Report
The AI Index Report tracks, collates, distills, and visualizes data related to artificial intelligence.
Research Focus Areas
AI is a general-purpose technology that can be used for good and for ill. Our vision for the future is led by a commitment to promote human-centered uses of AI, to ensure that humanity benefits from the technology, and to ensure that those benefits are broadly shared. In support of these goals, our research falls into three key focus areas: Human Impact, Augmenting Human Capabilities, and Intelligence.
Current AI systems lack flexibility and contextual understanding, and they resist explanation in terms comprehensible to humans.
Today’s AI methods perform well on simple, well-defined, narrow tasks, but only after training on laboriously annotated data. While recent algorithms have enabled us to solve formerly intractable real-world problems, it remains to be seen how far they can go, and whether they can ultimately serve as the basis for a general theory of intelligence and the development of truly intelligent machines.
To create a machine-assisted yet human-centered world, we must develop the next generation of AI techniques, ones that overcome the limitations of current algorithms, expand the class of problems that can be addressed, and complement human cognitive and analytic styles. Ultimately we need machine intelligence that leads to good decisions, whether acting alone or in combination with human decision-makers. It should understand human language, emotions, intentions, behaviors, and interactions at multiple scales.
Tackling these challenges on both the theoretical and practical levels requires substantial fundamental research. Developing a next generation of human-centered machine intelligence will demand combining further research in core machine learning and artificial intelligence with approaches coming from our growing understanding of human intelligence developed in areas including neuroscience and cognitive science.
HAI seeks to develop new human-centered design methods and tools so that AI agents and applications can communicate with, collaborate with, and augment people more effectively, making their work better and more enjoyable. These breakthroughs will enable great progress in healthcare, education, sustainability, automation, and countless other domains.
AI has the potential to replace people in their jobs. But it also has the potential to educate, train, and augment people, making them better at their tasks and activities. AI can improve the quality of an individual’s work, resulting in better writing, design, healthcare, communication, teaching, and art.
People are social animals; machines are not. To achieve broad acceptance, AI systems must conform to the often-implicit cultural conventions that underlie human interaction and communication. When should such systems “listen” and when should they “speak up”? If they require a shared resource, how can they balance their own needs with those of others? If humans are asked to rely on machine guidance to augment their decisions (and perhaps override their intuition), they may need to understand the strengths and weaknesses of the AI.
The advances and considerations developed in our other focus areas, together with research in design methods, will help us create systems with these communication capabilities. This underlying research will be combined with the use of AI in important application domains, such as education, healthcare, and sustainability, where the new design methods and tools can be applied and evaluated.
To develop equitable and trustworthy technology, we must understand how AI interacts with humans as well as with vital social structures and institutions.
HAI’s multidisciplinary research on AI’s human impact aims to realize this vision. AI scientists, working alongside scholars across Stanford and other academic institutions, can take us far beyond superficial generalizations about “human vs. machine.” Through deeper understanding, we can better address the myriad issues society will confront as AI systems become commonplace.
Scholars are currently studying the extent to which algorithms introduce, compound, or mitigate biases and risk; “responsibility gaps” between decisions made by machines and people; the use and misuse of AI for surveillance, population control, and waging war; and the impact of AI on social institutions, judicial systems, government, industry structure, labor markets, economic growth, and trade across nations. This research will inform engagement with industry, government, and civil society to help guide AI’s development.
Featured HAI Research
37 research teams receive a total of $3 million for innovative AI projects
This year’s winners propose innovative, bold ideas pushing the boundaries of artificial intelligence.
HAI Research in the News
Generative AI Boosts Worker Productivity 14% in First Real-World Study
Bloomberg | April 23, 2023
Annual corporate investment in AI is 13 times greater than a decade ago
Quartz | April 7, 2023
Black Americans Are Much More Likely to Face Tax Audits
New York Times | January 31, 2023
The perils of machine learning in designing new chemicals and materials
Nature Machine Intelligence | April 25, 2022
If you have research questions, please contact Vanessa Parli, Director of Research.