Explaining Explainable AI

In recent years there has been an explosive expansion in research on eXplainable AI (XAI), with the professed aim of making AI systems more transparent, understandable, and fair. XAI attempts to use AI to explain AI, employing a variety of explanation strategies to justify AI decision-making and to allay public disquiet about the expanding deployment of AI in everyday life.

This talk reviews recent work by the UCD group on developing explanation strategies using case-based, counterfactual, and semi-factual post-hoc explanations for a wide variety of data types (from tabular data to images and time series). This work combines the algorithmic development of explanation methods with user testing, in an attempt to assess the extent to which people can come to understand the decisions of these AI systems. A major theme of the work is the extent to which XAI does not adequately consider users, how little is known about their psychology in these contexts, and how, ironically, these explanation methods could be unethically exploited by AI businesses to mislead users in the pursuit of commercial gain.
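To give a flavour of what a counterfactual explanation looks like in practice, below is a minimal sketch in Python, assuming a scikit-learn-style workflow. The toy loan-approval data, feature names, and the simple nearest-unlike-neighbour search are illustrative assumptions, not the UCD group's actual algorithms.

    # A minimal sketch of a case-based counterfactual explanation for
    # tabular data. The data and search strategy are illustrative only.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    # Toy loan-approval table: [income (k), debt (k)]; label 1 = approved.
    X = np.array([[60, 10], [75, 5], [80, 20], [30, 25], [25, 30], [40, 35]])
    y = np.array([1, 1, 1, 0, 0, 0])

    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

    def counterfactual(query):
        """Return the nearest training case the model labels differently."""
        pred = clf.predict([query])[0]
        # Candidate cases: those predicted to have the opposite outcome.
        candidates = X[clf.predict(X) != pred]
        dists = np.linalg.norm(candidates - query, axis=1)
        return pred, candidates[np.argmin(dists)]

    pred, cf = counterfactual(np.array([35, 28]))
    print(f"Prediction: {pred}; nearest counterfactual case: {cf}")
    # Read as: "Your loan was refused; had your income been 60k and your
    # debt 10k, it would have been approved."

The appeal of this style of explanation is that it grounds the model's decision in a concrete, minimally different case, which is the kind of contrastive justification people tend to find intuitive.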

Target audience:

  • Data Scientists
  • Managers, Business Leaders, and Decision Makers for small, medium, and large businesses
Mark Keane

Chair of Computer Science, UCD