Practical applications of explainable artificial intelligence methods (PRAXAI)

The PRAXAI webpage address is:

The PRAXAI special session at the 8th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2021) focuses on bringing research on Explainable Artificial Intelligence (XAI) to actual applications and tools, helping to integrate XAI as a must-have step in every AI pipeline. We welcome papers that showcase how XAI has been successfully applied in real-world AI-based tasks, helping domain experts understand a model's results. We also encourage the submission of novel techniques for augmenting and visualizing the information contained in model explanations. Furthermore, we expect presentations of practical development tools that make it easier for AI practitioners to integrate XAI methods into their daily work.

The PRAXAI 2021 session is related to the CHIST-ERA PACMEL project.

Important Dates

  • Submission Deadline: June 6, 2021 (extended from May 23, 2021)
  • Notification: July 25, 2021
  • Camera Ready Due: August 8, 2021

Call for papers

Submission Instructions

Papers (maximum of ten (10) pages) can be submitted through CMT:

Submissions to this special session must strictly follow the same specifications, requirements, and policies as main conference submissions, including the paper submission deadline, notification deadline, paper formatting and length, and all important policies.

Papers must be submitted in PDF according to the standard two-column U.S. letter style IEEE Conference template. See the IEEE Proceedings Author Guidelines for further information and instructions. Submissions failing to comply with paper formatting and author anonymity requirements will be rejected without review.

All submissions will be reviewed double-blind on the basis of technical quality, relevance to the special session's topics of interest, originality, significance, and clarity. Accepted full-length special session papers will be published by IEEE in the DSAA main conference proceedings under its Special Session scheme. All papers will be submitted for inclusion in the IEEE Xplore Digital Library. The conference proceedings will be submitted by IEEE for EI indexing through INSPEC.

Attendance: At least one author of each accepted paper must register in full and attend the conference to present the paper. No-show papers will be removed from the IEEE Xplore proceedings. See the DSAA 2021 registration page for details.

Aim and scope

Explainable Artificial Intelligence (XAI) has become an inherent component of data mining (DM) and machine learning (ML) pipelines in areas where insight into the decision process of an automated system is important.

Although explainability (or intelligibility) is not a new concept in AI, it has developed most extensively over the last decade, focusing mostly on explaining black-box models. Many successful frameworks, such as LIME, SHAP, LORE, Anchors, Grad-CAM, DeepLIFT, and others, have been developed to provide explanations and transparency for decisions made by machine learning models.
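Model-agnostic frameworks of this kind share a common recipe: perturb the input, observe how the black-box prediction changes, and attribute importance accordingly. As a minimal sketch of that idea (using scikit-learn's permutation importance rather than any of the frameworks named above; the dataset and model are illustrative choices, not drawn from this session):

```python
# Sketch of a model-agnostic explanation via permutation importance.
# Illustrative only: dataset, model, and parameters are arbitrary choices.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# A synthetic tabular task: 5 informative features out of 10.
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature column in turn and measure the accuracy drop:
# a large drop means the model relied heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

The same perturb-and-measure principle underlies LIME's local surrogate models and SHAP's Shapley-value attributions, though each formalizes it differently.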

However, artificial intelligence systems in real-life applications are rarely composed of a single machine learning model; rather, they are formed by a number of components orchestrated to work together toward selected goals. Similarly, explainability itself is a very broad concept that goes beyond explaining machine learning algorithms, being more a property of a system as a whole. Thus, the goal of XAI methods is not simply to provide an explanation of a decision made by an ML model, but to use this explanation to serve the primary goal of the system as a whole by improving its transparency, accountability, and interpretability. We believe that these properties can be achieved (and should be, whenever possible) by using interpretable models, knowledge-based explanations, and human-in-the-loop interactive explanations (mediations). Explanations should be built in a context-aware manner that takes into consideration not only the goal of the system, but also the end user of the explanation and the characteristics of the data.

Therefore, in this special session we focus on works that apply different paradigms of XAI as a means of solving particular problems in domains such as manufacturing, healthcare, planning, and decision making. Each of these domains uses different types of data, which require different techniques to display model explanations properly. It is common, for instance, to find heatmaps over images highlighting the pixels most important for a model's prediction, but analogous techniques for other data types, such as tabular data, time series, or graphs, are not as well studied. Thus, works describing visual presentations of model explanations for data types other than images and language will also be of interest in the session.

We also focus on the application of XAI methods within the machine learning/data mining pipeline to aid data scientists in building better AI systems. Such applications include, but are not limited to: feature engineering with XAI, feature and model selection with XAI, and evaluation and visualization of the ML/DM training process with XAI. Finally, we are also interested in the development of tools that transparently and easily integrate XAI methods into currently popular machine learning and deep learning libraries.
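As one hedged illustration of "feature selection with XAI": explanation-style importance scores can be used to prune a feature set before retraining. The sketch below uses impurity-based importances from scikit-learn; the data, cutoff, and model are illustrative assumptions, not a method prescribed by this session.

```python
# Sketch: using explanation-style importance scores for feature selection.
# All data, thresholds, and model choices here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic task: only 4 of 20 features actually carry signal.
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=4, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

# Rank features by the model's impurity-based importances
# and keep only the top 8 (arbitrary cutoff for illustration).
keep = np.argsort(model.feature_importances_)[-8:]

# Compare cross-validated accuracy before and after pruning.
full = cross_val_score(RandomForestClassifier(random_state=1),
                       X, y, cv=3).mean()
pruned = cross_val_score(RandomForestClassifier(random_state=1),
                         X[:, keep], y, cv=3).mean()
print(f"all 20 features: {full:.3f}, top 8 features: {pruned:.3f}")
```

In a real pipeline the importances could equally come from SHAP values or permutation importance; the point is that the explanation output feeds back into the modelling process rather than serving only as a post-hoc report.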

Topics of interest

  • Model explanations verbalized in human-comprehensible natural language
  • Explainable reinforcement learning
  • Explainable AI for Planning and Decision Making
  • Medical XAI
  • Evaluation and visualization of explanations
  • Software tools for interpretable machine learning and data mining
  • Knowledge-augmented explanations
  • Human-in-the-loop explanations
  • Interactive explanations
  • Visualization of model explanations for different types of data apart from language and images (tabular data, time series, graphs…)
  • XAI software development and its integration into popular ML/DL libraries

Program Committee (tentative)

  • Javier del Ser, Tecnalia, Spain
  • Eneko Osaba, Tecnalia, Spain
  • Ricardo Aler, Universidad Carlos III de Madrid, Spain
  • Benslimane Djamal, Lyon 1 University, France
  • Boyan Xu, Guangdong University of Technology, China
  • Juan Pavón, Universidad Complutense de Madrid, Spain
  • Héctor Menéndez, Middlesex University London, UK
  • Gema Bello, Universidad Politécnica de Madrid, Spain
  • Angel Panizo, Universidad Politécnica de Madrid, Spain
  • Alejandro Martín, Universidad Politécnica de Madrid, Spain
  • Javier Huertas, Universidad Politécnica de Madrid, Spain
  • Cristian Ramirez, Universidad Politécnica de Madrid, Spain
  • Martin Atzmueller, Universität Osnabrück, Germany
  • Kacper Sokol, University of Bristol, UK
  • Sławomir Nowaczyk, Halmstad University, Sweden
  • Jerzy Stefanowski, Poznan University of Technology, Poland
  • Jose Palma, University of Murcia, Spain
  • Michał Choraś, UTP University of Science and Technology, Poland
  • Bogusław Cyganek, AGH University of Science and Technology in Krakow, Poland



The session took place online on October 7, 2021, 4–5 pm CET.

  • Constructing Global Coherence Representations: Identifying Interpretability and Coherences of Transformer Attention in Time Series Data, Speaker: Leonid Schwenke
  • Explainable clustering with multidimensional bounding boxes, Speaker: Michał Kuk
  • Explainable artificial intelligence for data science on customer churn, Speaker: Carson Leung
  • Explaining Multimodal Errors in Autonomous Vehicles, Speaker: Leilani Gilpin
praxai/start2021.txt · Last modified: 2021/10/07 14:41 by gjn