Practical applications of explainable artificial intelligence methods (PRAXAI)

PRAXAI webpage address is http://praxai.geist.re

The PRAXAI special session at the 9th IEEE International Conference on Data Science and Advanced Analytics focuses on bringing research on Explainable Artificial Intelligence (XAI) to actual applications and tools, helping to establish explainability as a must-have step in every AI pipeline. We welcome papers that showcase how XAI has been successfully applied in real-world AI-based tasks, helping domain experts understand the results of a model. We also encourage the submission of novel techniques for augmenting and visualizing the information contained in model explanations, as well as presentations of practical development tools that make it easier for AI practitioners to integrate XAI methods into their daily work.

The PRAXAI 2022 session is related to the CHIST-ERA XPM project.

Important Dates

  • Submission Deadline: June 15, 2022 (extended from June 1, 2022)
  • Notification: August 10, 2022 (extended from July 31, 2022)
  • Camera Ready Due: September 6, 2022 (extended from August 15 and August 25, 2022)

Call for papers

Program

Zoom Link: Connect

PRAXAI Session A: 14:00-15:20 (CET). Session Chair: Dr. Szymon Bobek

  1. Fast Hybrid Oracle-Explainer Approach to Explainability Using Optimized Search of Comprehensible Decision Trees
    Szczepanski, Mateusz; Pawlicki, Marek; Kozik, Rafal; Choras, Michal
    1. Video: Watch
  2. SurvSHAP: A Proxy-Based Algorithm for Explaining Survival Models with SHAP
    Alabdallah, Abdallah; Pashami, Sepideh; Rognvaldsson, Thorsteinn; Ohlsson, Mattias
    1. Video: Watch
  3. Abstract Argumentation for Explainable Satellite Scheduling
    Powell, Cheyenne; Riccardi, Annalisa
    1. Video: Watch
  4. Explaining Human Activities Instances Using Deep Learning Classifiers
    Arrotta, Luca; Civitarese, Gabriele; Fiori, Michele; Bettini, Claudio
    1. Video: Watch

Break (10 minutes)

PRAXAI Session B: 15:30-16:50 (CET). Session Chair: Dr. Victor Rodriguez-Fernandez

  1. Explainable expected goal models for performance analysis in football analytics
    Cavus, Mustafa; Biecek, Przemyslaw
    1. Video: Watch
  2. Why is the prediction wrong? Towards underfitting case explanation via meta-classification
    ZHOU, Sheng; BLANCHART, Pierre; Crucianu, Michel; Ferecatu, Marin
  3. Streamlining models with explanations in the learning loop
    Lomuscio, Francesco; Bajardi, Paolo; Perotti, Alan; Amparore, Elvio
  4. Roll Wear Prediction in Strip Cold Rolling with Physics-Informed Autoencoder and Counterfactual Explanations
    Jakubowski, Jakub; Stanisz, Przemysław; Bobek, Szymon; Nalepa, Grzegorz
    1. Video: Watch

Submission Instructions

The length of each paper submitted to the Research and Application tracks should be no more than 10 pages, whereas the maximum length is 2 pages for each abstract submitted to the Poster and Journal track. Both types of papers should be formatted following the standard 2-column U.S. letter style of the IEEE Conference template. See the IEEE Proceedings Author Guidelines (http://www.ieee.org/conferences_events/conferences/publishing/templates.html) for further information and instructions.

All submissions will be double-blind reviewed by the Program Committee on the basis of technical quality, relevance to the scope of the conference, originality, significance, and clarity. The names and affiliations of authors must not appear in the submissions, and bibliographic references must be adjusted to preserve author anonymity. Submissions failing to comply with paper formatting and author anonymity requirements will be rejected without review.

Authors are also encouraged to submit supplementary materials, e.g., source code and data provided through a GitHub-like public repository, to support the reproducibility of their research results.

Electronic submission site: https://cmt3.research.microsoft.com/DSAA2022

Aim and scope

Explainable Artificial Intelligence (XAI) has become an inherent component of data mining (DM) and machine learning (ML) pipelines in the areas where the insight into the decision process of an automated system is important.

Although explainability (or intelligibility) is not a new concept in AI, it has been most extensively developed over the last decade focusing mostly on explaining black-box models. Many successful frameworks were developed such as LIME, SHAP, LORE, Anchor, GradCam, DeepLift and others that aim at providing explanations and transparency to decisions made by machine learning models.
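
The perturbation-based idea behind several of these frameworks can be illustrated with a minimal feature-ablation sketch: replace one feature at a time with a baseline value and record how the prediction changes. The toy linear model, instance, and baseline below are invented for illustration; real use would wrap an arbitrary black-box predictor.

```python
# Minimal feature-ablation explanation, a simplified cousin of the
# perturbation idea used by frameworks such as LIME and SHAP.

def model(x):
    """Toy black-box model: only the first two features matter."""
    return 3.0 * x[0] - 1.0 * x[1] + 0.0 * x[2]

def ablation_explanation(predict, instance, baseline):
    """Attribute a prediction by replacing one feature at a time with
    its baseline value and recording the change in model output."""
    reference = predict(instance)
    contributions = []
    for i in range(len(instance)):
        perturbed = list(instance)
        perturbed[i] = baseline[i]
        contributions.append(reference - predict(perturbed))
    return contributions

x = [2.0, 4.0, 7.0]
phi = ablation_explanation(model, x, baseline=[0.0, 0.0, 0.0])
print(phi)  # → [6.0, -4.0, 0.0]: feature 2 is correctly found irrelevant
```

For the toy linear model these contributions recover the weighted inputs exactly; for non-linear models the frameworks named above add local surrogates (LIME) or averaging over feature coalitions (SHAP) on top of this basic perturbation step.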

However, artificial intelligence systems in real-life applications are rarely composed of a single machine learning model; rather, they are formed by a number of components orchestrated to work together towards selected goals. Similarly, explainability itself is a very broad concept that goes beyond the explanation of machine learning algorithms, being more of a property of a system as a whole. Thus, the goal of XAI methods is not simply to provide an explanation of a decision made by a ML model, but to use this explanation to achieve goals related to the primary goal of the system as a whole by improving its transparency, accountability, and interpretability. We believe that these properties can be achieved (and should be, whenever possible) by using interpretable models, knowledge-based explanations, and human-in-the-loop interactive explanations (mediations). Explanations should be built in a context-aware manner that takes into consideration not only the goal of the system, but also the end-user of the explanation and the characteristics of the data.

Therefore, in this special session we focus on works that apply different paradigms of XAI as a means of solving particular problems in many different domains such as manufacturing, healthcare, planning, decision making, etc. Each of these domains uses different types of data, which require different techniques to display model explanations properly. In this regard, it is common to find heatmaps on top of images highlighting the most important pixels for the model prediction, but analogous visualizations for other types of data such as tabular data, time series, or graphs are not so well studied. Thus, works that describe visual presentations of model explanations for types of data other than images and language will also be of interest to the session.

We also focus on the application of XAI methods within the machine learning/data mining pipeline in order to aid data scientists in building better AI systems. Such applications include, but are not limited to: feature engineering with XAI, feature and model selection with XAI, and evaluation and visualization of the ML/DM training process with XAI. Finally, we are also interested in the development of tools that integrate XAI methods transparently and easily into currently popular machine and deep learning libraries.
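
As a concrete sketch of XAI-aided feature selection, permutation importance can flag features the model does not actually use: shuffling an informative feature degrades accuracy, while shuffling a noise feature does not. The dataset, toy classifier, and selection threshold below are all invented for illustration.

```python
import random

random.seed(0)

# Toy dataset: the label depends on feature 0 only; feature 1 is noise.
X = [[i, random.random()] for i in range(100)]
y = [1 if row[0] >= 50 else 0 for row in X]

def predict(row):
    """Toy classifier that thresholds feature 0 and ignores feature 1."""
    return 1 if row[0] >= 50 else 0

def accuracy(X, y):
    return sum(predict(r) == t for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop after randomly shuffling one feature column."""
    base = accuracy(X, y)
    col = [row[feature] for row in X]
    random.shuffle(col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, col)]
    return base - accuracy(X_perm, y)

importances = [permutation_importance(X, y, f) for f in range(2)]
# Keep only features whose shuffling visibly hurts accuracy.
selected = [f for f, imp in enumerate(importances) if imp > 0.05]
print(importances, selected)
```

Here the noise feature gets zero importance and is dropped, mimicking how model explanations can guide feature selection before retraining on the reduced feature set.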

Topics of interest

  • Model explanations verbalized in human-comprehensible natural language
  • Explainable Reinforcement learning
  • Explainable AI for Planning and Decision Making
  • Automation and XAI
  • Cybersecurity and XAI
  • Medical XAI
  • Evaluation and visualization of explanations
  • Software tools for interpretable machine learning and data mining
  • Knowledge-augmented explanations
  • Human in the loop explanations
  • Interactive explanations
  • Visualization of model explanations for different types of data apart from language and images (tabular data, time series, graphs, etc.)
  • XAI software development and its integration into popular ML/DL libraries

Program Committee (tentative)

  • Javier del Ser, Tecnalia
  • Ricardo Aler, Universidad Carlos III de Madrid
  • Felix José Fuentes Hurtado, Universidad Politécnica de Valencia
  • Juan Pavón, Universidad Complutense de Madrid
  • Francesco Piccialli, University of Naples Federico II
  • Salvatore Cuomo, University of Naples Federico II
  • Edoardo Prezioso, University of Naples Federico II
  • Federico Gatta, University of Naples Federico II
  • Fabio Giampaolo, University of Naples Federico II
  • Stefano Izzo, University of Naples Federico II
  • Martin Atzmueller, Universität Osnabrück
  • Kacper Sokół, University of Bristol
  • Sławomir Nowaczyk, Halmstad University
  • Michal Choras, UTP University of Science and Technology
  • Boguslaw Cyganek, AGH University of Science and Technology in Krakow
  • Timos Kipouros, University of Cambridge
  • Jerzy Stefanowski, Poznan University of Technology, Poland

Organizers

Papers

  • Fast Hybrid Oracle-Explainer Approach to Explainability Using Optimized Search of Comprehensible Decision Trees
    Szczepanski, Mateusz; Pawlicki, Marek; Kozik, Rafal; Choras, Michal
  • SurvSHAP: A Proxy-Based Algorithm for Explaining Survival Models with SHAP
    Alabdallah, Abdallah; Pashami, Sepideh; Rognvaldsson, Thorsteinn; Ohlsson, Mattias
  • Abstract Argumentation for Explainable Satellite Scheduling
    Powell, Cheyenne; Riccardi, Annalisa
  • Explaining Human Activities Instances Using Deep Learning Classifiers
    Arrotta, Luca; Civitarese, Gabriele; Fiori, Michele; Bettini, Claudio
  • Explainable expected goal models for performance analysis in football analytics
    Cavus, Mustafa; Biecek, Przemyslaw
  • Why is the prediction wrong? Towards underfitting case explanation via meta-classification
    ZHOU, Sheng; BLANCHART, Pierre; Crucianu, Michel; Ferecatu, Marin
  • Streamlining models with explanations in the learning loop
    Lomuscio, Francesco; Bajardi, Paolo; Perotti, Alan; Amparore, Elvio
  • Roll Wear Prediction in Strip Cold Rolling with Physics-Informed Autoencoder and Counterfactual Explanations
    Jakubowski, Jakub; Stanisz, Przemysław; Bobek, Szymon; Nalepa, Grzegorz

Past events

praxai/start2022.txt · Last modified: 2022/10/24 12:33 by sbk