===== Schedule Spring 2026 =====
  
  * **[PHD TRACK] 2026.03.19**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University, [[#section20260319|Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals]]
    * Meeting link: [[https://teams.microsoft.com/meet/33976511999142?p=J72QcmpdpxsUSETlIe|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQClgAkSalrIQY9WZA3zoYEGAYpG_AE0hbLjYOMDK1lQ-74?e=k2FNg9|View]]
    * Presentation slides: {{:aira:slides-maciej-mozolewski-2026-03-19.pdf|Download}}
  
  * **[RESEARCH TRACK] 2026.03.12**: Aleksandra Nabożny, Assistant Professor @ Akademia Leona Koźmińskiego, [[#section20260312|What is and how to classify credibility of online health information?]]
    * Meeting link: [[https://teams.microsoft.com/meet/3692206790088?p=w5o4kDyNtT0vCV2INH|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQBrCjQaK0qpQqfuxU71nfp2Aa3dTo5YvAzqr6LZjtCknrg?e=8fEfFn|View]]
    * Presentation slides: {{:aira:slides-aleksandra-nabozny-2026-03-12.pdf|Download}}
  
  * **[RESEARCH TRACK] 2026.03.05**: Jaromir Savelka, Research Associate @ Carnegie Mellon University, [[#section20260305|Large Language Models and Empirical Legal Studies]]
  
  
==== 2026-03-19 ====
<WRAP column 15%>
{{ :aira:mtm-new-foto.jpeg?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University

**Title**: Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals

**Abstract**:
While complex machine learning models excel at time series classification, standard explanation methods often produce raw numeric feature attributions that fail to provide actionable insights for domain experts. To address this, I introduce PHAR (Post-hoc Attribution Rules), a method that transforms numeric attributions into structured, human-readable rules. By employing rule fusion, this approach localizes decision-relevant segments and expresses them as clear logical conditions directly on the raw series. To scale this logic from individual predictions to global model understanding, I present the Open-Box framework. Acting as an explainer-agnostic global surrogate, it aggregates local rules and resolves overlaps, allowing us to extract tacit domain knowledge that experts might otherwise miss. Complementing these symbolic approaches, I also discuss a prototype-driven framework for generating sparse counterfactual explanations, which highlights the minimal, physiologically plausible signal modifications required to alter a prediction. By visually aligning logical conditions and prototypical examples with temporal patterns, this integrated ecosystem elevates time series explanations from simple scores to actionable knowledge, bridging the gap between complex models and expert decision support.

**Bio**:
Maciej Mozolewski is a PhD researcher at Jagiellonian University and a member of the GEIST research group led by Prof. Grzegorz J. Nalepa. His work centers on human-centered explainable AI (XAI) for dynamic data, including multivariate time series and explanation visualization. He focuses on post-hoc methods that bridge the gap between complex machine learning models and human-intelligible explanations, specifically through symbolic rules and counterfactuals. With nearly a decade of experience as a software engineer and data scientist, he combines his technical background with an M.A. in Psychology to ground his academic research in real-world human decision-making.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-12 ====
<WRAP column 15%>
{{ :aira:aleksandra-nabozny-foto.jpeg?200| }}
</WRAP>
  