Explainable AI (XAI) refers to methods and techniques that make an AI system's decisions and predictions understandable and interpretable to humans, rather than leaving the system to operate as an opaque "black box." Techniques include feature importance rankings, visualization tools, attention mechanisms, and simplified model approximations, all of which help users understand which inputs most influenced the model's output and the reasoning behind its decisions.
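One of the techniques mentioned above, feature importance ranking, can be illustrated with permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The sketch below uses a hand-written toy model and pure-Python permutation importance; the feature names, weights, and data are illustrative assumptions, not taken from any real system.

```python
import random

# Toy "black box": a fixed linear scorer standing in for a trained model.
# (Feature names and weights are hypothetical, for illustration only.)
def model(features):
    size, rooms, noise = features
    return 3.0 * size + 1.0 * rooms + 0.0 * noise  # noise is ignored

def mse(model_fn, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model_fn(x) - t) ** 2 for x, t in zip(X, y)) / len(X)

def permutation_importance(model_fn, X, y, feature_idx, seed=0):
    """Error increase when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    baseline = mse(model_fn, X, y)
    column = [row[feature_idx] for row in X]
    rng.shuffle(column)
    X_perm = [row[:feature_idx] + [v] + row[feature_idx + 1:]
              for row, v in zip(X, column)]
    return mse(model_fn, X_perm, y) - baseline

# Synthetic data; labels come from the model itself, so baseline error is 0.
rng = random.Random(42)
X = [[rng.uniform(0, 10) for _ in range(3)] for _ in range(200)]
y = [model(x) for x in X]

scores = [permutation_importance(model, X, y, i) for i in range(3)]
```

Here `scores` ranks the features: the heavily weighted `size` feature shows the largest error increase, while the ignored `noise` feature scores zero, telling a user which inputs actually drove the model's predictions.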

A Stanford researcher advocates for clarity about the different types of interpretability and the contexts in which it is useful.


Stanford researchers show that shifting the cognitive costs and benefits of engaging with AI explanations could result in fewer erroneous decisions due to AI overreliance.
