
EECS Colloquium: Explaining AI to People: Proposing then Evaluating Explanations, Processes, and Tasks — Jonathan Dodge, Oregon State University

Online
ZOOM

About the event

Abstract: How should AI systems explain themselves to people trying to decide whether to believe AI decisions? How should we evaluate how good the AI explanations are? To answer these questions, we journeyed far and wide: learning from explanation experts like esports commentators (shoutcasters), proposing novel explanations in a variety of styles for diverse domains, devising processes in which to use the explanations, and evaluating explanations using both qualitative and quantitative studies. This talk will focus on three stops in our journey. In the first, we generated and evaluated textual explanations to reveal fairness issues in automated sentencing/bail decisions. In the second, we created a new AI assessment process (AAR/AI) based on After-Action Review to scaffold people's ability to make sense of the explanations. In the third, we brought new XAI evaluation devices to researchers: a new task for empirically evaluating the quality of explanations (Ranking Agents), and a new strategy, inspired by software engineering's Mutation Testing, to create agents whose variations are controllable by the XAI researcher (Mutant Agent Generation). We conclude with a preview of our continuing journey to improve the explainability of AI.

Bio: Jonathan Dodge is a Ph.D. candidate at Oregon State University, advised by Dr. Margaret Burnett. He received his M.S. (2009) in Computer Graphics from Oregon State University, and his B.S. (2006) from Harvey Mudd College. His research approaches eXplainable AI (XAI) problems from both the human perspective and the system side. This broad perspective has required working on the entire pipeline for explanations, from qualitative formative research and AI system creation to quantitative evaluation of explanations with human user study participants, and everything in between.

