
Intelligence Research Mission

Current AI systems lack flexibility and contextual understanding, and resist explanation in terms comprehensible to humans. Ultimately, we need to develop machine intelligence that understands human language, emotions, intentions, behaviors, and interactions at multiple scales.

Today’s AI methods can perform simple, well-defined, narrow tasks well, but only after training on laboriously annotated data. While recent algorithms have enabled us to solve formerly intractable real-world problems, it remains to be seen how far they can go, and whether they can ultimately serve as the basis for a general theory of intelligence and the development of truly intelligent machines. 

To create a machine-assisted, yet human-centered, world, we must develop a next generation of AI techniques that overcome the limitations of current algorithms, expand the class of problems that can be addressed, and complement human cognitive and analytic styles. Ultimately, we need machine intelligence that leads to good decisions, whether acting alone or working in combination with human decision-makers, and that understands human language, emotions, intentions, behaviors, and interactions at multiple scales.

Tackling these challenges at both the theoretical and practical levels requires substantial fundamental research. Developing the next generation of human-centered machine intelligence will demand combining further research in core machine learning and artificial intelligence with approaches drawn from our growing understanding of human intelligence in fields such as neuroscience and cognitive science.

Additional Research Areas

Augment Human Capabilities
Human Impact

Sample Research Projects

Adversarial Examples for Humans?

Gregory Valiant and Noah Goodman

Humans are generally regarded as the gold standard for robust perception and classification, and the implicit assumption in much of the work on “adversarial examples” is that humans do not share such vulnerabilities. We aim to understand whether there are settings where nearly every natural input (e.g., in vision or speech perception) can be turned into an “illusion.” We hope this work will yield significant insights into human perception; the results may also have implications for security and safety.
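For readers unfamiliar with adversarial examples, the standard construction against machine models can be sketched in a few lines. This is a minimal, hypothetical illustration (a toy linear classifier and a fast-gradient-sign-style perturbation, not the project's actual methods or models):

```python
import numpy as np

# Toy linear "classifier": sign(w . x) labels an input +1 or -1.
w = np.array([1.0, -2.0, 0.5])

def predict(x):
    return 1 if w @ x >= 0 else -1

# A clean input the model labels +1 (w @ x = 1.0).
x = np.array([2.0, 0.5, 0.0])

# FGSM-style perturbation: step each coordinate a small amount eps
# against the class score; for a linear model the score's gradient
# is just w, so the worst-case step is eps * sign(w).
eps = 0.6
x_adv = x - eps * np.sign(w)   # w @ x_adv = 1.0 - eps * sum(|w|) < 0

flipped = predict(x) != predict(x_adv)   # label flips despite tiny change
```

The question the project raises is whether analogous small, structured perturbations exist for human perception, where visual illusions suggest at least some such settings do.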

Free Exploration in Human-Centered AI Systems

Mohsen Bayati and Ramesh Johari

All systems that learn from their environment must grapple with a tradeoff between making decisions that maximize current rewards (“exploitation”) and decisions that are likely to teach the system about the environment and thus increase future rewards (“exploration”). Automated machine learning and AI systems use techniques that balance exploration and exploitation to maximize rewards over time. This dynamic becomes more complicated at the interface between ML systems and humans. Our goal is to develop a formal methodology for reasoning about the free exploration provided by humans who interact with machine learning and AI systems.
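The exploration/exploitation tradeoff described above is classically formalized as a multi-armed bandit. As a minimal sketch (a standard epsilon-greedy strategy on synthetic rewards, not the project's proposed methodology), note how a small exploration rate is enough for the learner to find and then mostly exploit the best arm:

```python
import random

def epsilon_greedy_bandit(true_means, epsilon=0.1, steps=1000, seed=0):
    """With probability epsilon pick a random arm (explore);
    otherwise pick the arm with the best estimated reward (exploit)."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n        # pulls per arm
    values = [0.0] * n      # running mean reward per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(n)                            # explore
        else:
            arm = max(range(n), key=lambda a: values[a])      # exploit
        reward = true_means[arm] + rng.gauss(0, 0.1)          # noisy payoff
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # update mean
    return counts

# Three arms with hidden mean rewards; the learner should converge
# to pulling the 0.8 arm most often.
counts = epsilon_greedy_bandit([0.2, 0.5, 0.8])
```

In the human-in-the-loop setting the project studies, some of this exploration comes “for free” from the varied choices humans make, rather than from randomization the algorithm injects.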
