AAAI 2023 Tutorial

Explainable AI:

On Explainable AI: From Theory to Motivation, Industrial Applications, XAI Coding & Engineering Practices
Half-day (3 hours) Tutorial
Tuesday, February 7th, 2023 (TBC)
2:00 PM – 6:00 PM (EST / New York Time) (TBC)
Room: Virtual (TBC) / Physical Room (201)
YouTube Link (TBD)
Slides (due Feb 7th 2023) - Coding Materials (due Feb 7th 2023)

Overview

The future of AI lies in enabling people to collaborate with machines to solve complex problems. Like any efficient collaboration, this requires good communication, trust, clarity and understanding. XAI (eXplainable AI) aims to address these challenges by combining the best of symbolic AI and traditional Machine Learning. The topic has been studied for years by different AI communities, with different definitions, evaluation metrics, motivations and results.

This tutorial is a snapshot of XAI work to date, and surveys what the AI community has achieved, with a focus on machine learning and symbolic AI approaches (given the half-day format). We motivate the need for XAI in real-world and large-scale applications, while presenting state-of-the-art techniques and best XAI coding and engineering practices. In the first part of the tutorial, we give an introduction to the different aspects of explanation in AI. We then focus on two specific approaches: (i) XAI using machine learning and (ii) XAI using a combination of graph-based knowledge representation and machine learning. For both we get into the specifics of the approach, the state of the art and the research challenges for the next steps. The final part of the tutorial gives an overview of real-world applications of XAI as well as best XAI coding and engineering practices, as XAI technologies need to be seamlessly integrated into AI applications.

Outline

Part I: Introduction, Motivation & Evaluation - 20 minutes

Broad-spectrum introduction to explanation in AI, describing and motivating the need for explainable AI techniques from both theoretical and applied standpoints. In this part we also summarize the prerequisites and introduce the different angles taken by the rest of the tutorial.

Part II: Explanation in AI (not only Machine Learning!) - 40 minutes

General overview of explanation across various fields of AI (optimization, knowledge representation and reasoning, machine learning, search and constraint optimization, planning, natural language processing, robotics and vision), to align everyone on the various definitions of explanation. Evaluation of explainability will also be covered. The tutorial will cover most of the definitions but will only go into depth in the following areas: (i) Explainable Machine Learning, (ii) Explainable AI with Knowledge Graphs and Machine Learning.

Part III: Explanation for Deep Neural Networks - 40 minutes

In this section of the tutorial we address the challenge of explaining deep neural networks, covering models that consume images, text and time series.
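
To give a flavour of one family of techniques covered in this part, below is a minimal sketch of a gradient-based saliency map for an image classifier. The pretrained ResNet-18 and the random input tensor are our own illustrative assumptions, not material from the tutorial itself.

    # Minimal sketch (assumed setup): gradient-based saliency for an image classifier.
    # The pretrained ResNet-18 and the random input are placeholders for illustration only.
    import torch
    import torchvision.models as models

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    # Stand-in for a preprocessed 224x224 RGB image.
    x = torch.rand(1, 3, 224, 224, requires_grad=True)

    scores = model(x)                          # class logits
    top_class = scores.argmax(dim=1).item()    # predicted class index
    scores[0, top_class].backward()            # gradient of the top score w.r.t. the input

    # Saliency map: magnitude of the input gradient, taking the max over colour channels.
    saliency = x.grad.abs().max(dim=1)[0]      # shape: (1, 224, 224)
    print(saliency.shape)

The same pattern underlies many attribution methods (e.g. integrated gradients), which mainly differ in how the input gradient is aggregated or averaged.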

Part IV: On The Role of Knowledge Graphs in Explainable Machine Learning - 40 minutes

In this section of the tutorial we address the explanatory power of combining graph-based knowledge bases with machine learning approaches.
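
As a toy illustration of the idea explored in this part, the sketch below grounds a model prediction in a path over a small knowledge graph. The graph, the entities and the "explanation as a path" reading are illustrative assumptions only, not the tutorial's own material.

    # Minimal sketch (assumptions: a toy knowledge graph built with networkx).
    import networkx as nx

    kg = nx.DiGraph()
    kg.add_edge("Flight AF123", "Storm Zone", label="crosses")
    kg.add_edge("Storm Zone", "Severe Weather", label="classified_as")
    kg.add_edge("Severe Weather", "Delay Risk", label="causes")

    # Suppose a learned model predicts a high delay risk for Flight AF123.
    # One graph-grounded explanation is a path connecting the instance to the prediction.
    path = nx.shortest_path(kg, "Flight AF123", "Delay Risk")
    for u, v in zip(path, path[1:]):
        print(f"{u} --{kg.edges[u, v]['label']}--> {v}")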

Part V: XAI Applications and Lessons Learnt - 40 minutes

We will review some XAI open-source and commercial tools applied to real-world examples, and describe how XAI can be instantiated depending on the technical and business challenge. In particular we focus on a number of use cases: (1) explaining object detection, (2) explaining obstacle detection for autonomous trains, (3) explaining flight performance, (4) an interpretable flight delay prediction system with built-in explanation capabilities, (5) a wide-scale contract management system that predicts and explains the risk tier of corporate projects using semantic reasoning over knowledge graphs, (6) an expenses system that identifies, explains, and predicts abnormal expense claims by employees of large organizations in 500+ cities, (7) an explanation system for credit decisions, (8) an explanation system for medical conditions, as well as 8 other industrial use cases.

Part VI: XAI Tools, Coding & Engineering Practices Conclusion, and Research Challenges - 40 minutes

We go through XAI coding and engineering practices, demonstrating how XAI can be integrated and tested. This section walks through development code, shared via Google Colab for easy interaction with the AAAI audience. A Google account (to access Google Colab) is required for this section.
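
As an illustration of the kind of integration-and-testing pattern demonstrated in this part, here is a minimal, self-contained sketch. The scikit-learn model, the LIME explainer and the sanity check are our own illustrative assumptions, not the tutorial's actual Colab notebooks.

    # Minimal sketch (assumptions: a scikit-learn model and the LIME library);
    # the tutorial's actual notebooks may use different tools and datasets.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    X, y = data.data, data.target
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Integration: the explainer lives next to the model it explains.
    explainer = LimeTabularExplainer(
        X,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        mode="classification",
    )
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)

    # Engineering practice: test the explainer like any other component,
    # e.g. check that it returns the requested number of feature attributions.
    assert len(explanation.as_list()) == 5
    print(explanation.as_list())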

Schedule

Part I: Introduction and Motivation - 20 minutes

[2:00pm - 2:20pm EST / New York Time] (Confirmed)

Part II: Explanation in AI (not only Machine Learning!) - 40 minutes

[2:20pm - 3:00pm EST / New York Time] (Confirmed)

Part III: Explanation for Deep Neural Networks - 40 minutes

[3:00pm - 3:40pm EST / New York Time] (Confirmed)

Break - 20 minutes

[3:40pm - 4:00pm EST / New York Time] (Confirmed)

Part IV: On The Role of Knowledge Graphs in Explainable Machine Learning - 40 minutes

[4:00pm - 4:40pm EST / New York Time] (Confirmed)

Part V: XAI Applications and Lessons Learnt - 40 minutes

[4:40pm - 5:20pm EST / New York Time] (Confirmed)

Part VI: XAI Tools, Coding and Engineering Practices Conclusion, and Research Challenges - 40 minutes

[5:20pm - 6:00pm EST / New York Time] (Confirmed)

Presenters

Freddy Lecue

Freddy Lecue (PhD 2008, Habilitation 2015) has been an Artificial Intelligence (AI) Research Director at J.P. Morgan in New York, USA since August 2022. He is also a research associate at INRIA, in the WIMMICS team, Sophia Antipolis, France. He was the Chief AI Scientist at CortAIx (Centre of Research & Technology in Artificial Intelligence eXpertise), Thales, in Montreal, Canada from January 2019 to August 2022. Before joining Thales he was principal scientist and research manager in artificial intelligence systems, i.e. systems combining learning and reasoning capabilities, at Accenture Technology Labs, Dublin, Ireland. Before joining Accenture Labs, he was a Research Scientist at IBM Research, Smarter Cities Technology Center (SCTC) in Dublin, Ireland, and lead investigator of the Knowledge Representation and Reasoning group. His main research interest is Explainable AI systems. The application domain of his current research is Smarter Cities, with a focus on Smart Transportation and Buildings. In particular, he is interested in exploiting and advancing Knowledge Representation and Reasoning methods for representing and inferring actionable insight from large, noisy and heterogeneous data. He has over 50 publications in refereed journals and conferences related to Artificial Intelligence (AAAI, ECAI, IJCAI, IUI) and the Semantic Web (ESWC, ISWC), all describing new systems for handling expressive semantic representation and reasoning. He co-organized the first workshops on semantic cities (AAAI 2012, 2014, 2015; IJCAI 2013) and the first two tutorials on smart cities at AAAI 2015 and IJCAI 2016. Prior to joining IBM, Freddy Lecue was a Research Fellow (2008-2011) with the Centre for Service Research at The University of Manchester, UK. He was awarded the second prize for his Ph.D. thesis by the French Association for the Advancement of Artificial Intelligence in 2009, and received the Best Research Paper Award at the ACM/IEEE Web Intelligence conference in 2008.

Pasquale Minervini

Pasquale Minervini is a Lecturer in Natural Language Processing at the School of Informatics, University of Edinburgh. Previously, he was a Senior Research Fellow at UCL (2017-2022), a postdoc at the INSIGHT Centre for Data Analytics, Ireland (2016), and a postdoc at the University of Bari, Italy (2015). His research interests are in NLP and ML, with a focus on relational learning and learning from graph-structured data, solving knowledge-intensive tasks, hybrid neuro-symbolic models, compositional generalisation, and designing data-efficient and robust deep learning models. Pasquale has published over 60 peer-reviewed papers in top-tier AI conferences, receiving multiple awards (including an Outstanding Paper Award at ICLR 2021), and has delivered several tutorials on Explainable AI and relational learning (including four AAAI tutorials). On behalf of the University of Edinburgh and UCL, he is the Principal Investigator (PI) of the EU Horizon 2020 research grant CLARIFY (Cancer Long Survivors Artificial Intelligence Follow Up), the Edinburgh Laboratory for Integrated Artificial Intelligence (ELIAI) grant Gradient-based Learning of Complex Latent Structures, and multiple industry grants and donations. In 2020, his team won two of the three tracks of the Efficient Open-Domain Question Answering Challenge at NeurIPS 2020. He routinely collaborates with researchers across academia and industry. For more information, see his website: http://www.neuralnoise.com

Riccardo Guidotti

Riccardo Guidotti is currently a post-doc researcher at the Department of Computer Science, University of Pisa, Italy, and a member of the Knowledge Discovery and Data Mining Laboratory (KDDLab), a joint research group with the Information Science and Technology Institute of the National Research Council in Pisa. Riccardo Guidotti was born in 1988 in Pitigliano (GR), Italy. He graduated cum laude in Computer Science at the University of Pisa (BS in 2010, MS in 2013), and received his PhD in Computer Science from the same institution with a thesis on Personal Data Analytics. He won an IBM fellowship and was an intern at IBM Research Dublin, Ireland, in 2015. His research interests are in personal data mining, clustering, explainable models, and the analysis of transactional data related to recipes and migration flows.

Fosca Giannotti

Fosca Giannotti is Director of Research at the Information Science and Technology Institute “A. Faedo” of the National Research Council, Pisa, Italy. Fosca Giannotti is a scientist in data mining, machine learning and big data analytics. Fosca leads the Pisa KDD Lab (Knowledge Discovery and Data Mining Laboratory, http://kdd.isti.cnr.it), a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs centered on data mining. Fosca’s research focus is on social mining from big data: human dynamics, social networks, diffusion of innovation, privacy-enhancing technology and explainable AI. She has coordinated dozens of research projects and industrial collaborations. Fosca is now the coordinator of SoBigData, the European research infrastructure on Big Data Analytics and Social Mining, an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation (http://www.sobigdata.eu). From 2012 to 2015 Fosca was general chair of the steering board of ECML-PKDD (European Conference on Machine Learning), and she is currently a member of the steering committees of EuADS (European Association on Data Science) and of AIIS, the Italian Laboratory of Artificial Intelligence and Autonomous Systems.