aira:start [2026/04/16 14:27] (current) – mtm
===== Schedule Spring 2026 =====

  * **[RESEARCH TRACK] 2026.04.16**: Patrick Altmeyer, Researcher @ Delft University of Technology, [[#section20260416|Explaining Models or Modelling Explanations? Counterfactual Explanations and Algorithmic Recourse for Trustworthy AI]]
    * Meeting link: [[https://teams.microsoft.com/meet/372057591033533?p=u4Kxst3Eu3DsxRwxw8|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQBnzx56wup2SqdgH7vcGMsTAVLulmE_zJmARUq1q1R49og?e=HHKlf1|View]]
    * Presentation slides: {{:aira:slides-patrick-altmeyer-2026-04-16.pdf|Download}}

  * **[PHD TRACK] 2026.04.09**: Dmytro Polishchuk, PhD Candidate @ Jagiellonian University, [[#section20260409|A Time-Aware GitHub Mining Framework for Empirical Software Quality Studies.]]
    * Meeting link: [[https://teams.microsoft.com/meet/357521097755345?p=DozT4DQGJzfZH4PXfD|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQBZU45fdhPPTJ780XqynOeuAVb4v5B4fIOgb0KD1EKkI5U?e=aGWY1Z|View]]
    * Presentation slides: TBA

  * **[PHD TRACK] 2026.03.26**: Mateusz Bułat, PhD Candidate @ Jagiellonian University, [[#section20260326|Formal Grammar Transducers in medical image analysis and segmentation correction]]
    * Meeting link: [[https://teams.microsoft.com/meet/349382193259763?p=arFNsNSL0nyoywz4lI|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQAeWFGK6mVbT5yfQdF9HrLmAXl55pp41keoHp8rmGsv7lM?e=uYof0z|View]]
    * Presentation slides: {{:aira:slides-mateusz-bulat-2026-03-26.pdf|Download}}

  * **[PHD TRACK] 2026.03.19**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University, [[#section20260319|Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals]]
    * Meeting link: [[https://teams.microsoft.com/meet/33976511999142?p=J72QcmpdpxsUSETlIe|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQClgAkSalrIQY9WZA3zoYEGAYpG_AE0hbLjYOMDK1lQ-74?e=k2FNg9|View]]
    * Presentation slides: {{:aira:slides-maciej-mozolewski-2026-03-19.pdf|Download}}

  * **[RESEARCH TRACK] 2026.03.12**: Aleksandra Nabożny, Assistant Professor @ Akademia Leona Koźmińskiego, [[#section20260312|What is and how to classify credibility of online health information?]]
    * Meeting link: [[https://teams.microsoft.com/meet/3692206790088?p=w5o4kDyNtT0vCV2INH|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQBrCjQaK0qpQqfuxU71nfp2Aa3dTo5YvAzqr6LZjtCknrg?e=8fEfFn|View]]
    * Presentation slides: {{:aira:slides-aleksandra-nabozny-2026-03-12.pdf|Download}}

  * **[RESEARCH TRACK] 2026.03.05**: Jaromir Savelka, Research Associate @ Carnegie Mellon University, [[#section20260305|Large Language Models and Empirical Legal Studies.]]

==== 2026-04-16 ====
<WRAP column 15%>
{{ :aira:patrick-altmeyer-foto.png?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Patrick Altmeyer, Researcher @ Delft University of Technology

**Title**: Explaining Models or Modelling Explanations? Counterfactual Explanations and Algorithmic Recourse for Trustworthy AI

**Abstract**:
Counterfactual explanations (CE) and algorithmic recourse (AR) have emerged as promising approaches towards explaining opaque machine learning models and empowering individuals affected by them. This seminar will explore unexpected challenges and new opportunities in this context and demonstrate how counterfactuals can be used to improve the trustworthiness of models. It will summarize some of the main findings of Patrick's PhD research: https://www.patalt.org/thesis/. The slides will be made available on the website ahead of the seminar.

**Biogram**:
Patrick is a trained economist, computer scientist, and researcher. In his research, he has challenged long-standing paradigms in explainable AI, developed novel methods to make AI more trustworthy, and questioned hyperbolic claims about AGI. Patrick is also an experienced open-source software developer and the core developer and maintainer of Trustworthy Artificial Intelligence in Julia (Taija).
</WRAP>
<WRAP clear></WRAP>

==== 2026-04-09 ====
<WRAP column 15%>
{{ :aira:dmytro-polishchuk-foto.png?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Dmytro Polishchuk, PhD Candidate @ Jagiellonian University

**Title**: A Time-Aware GitHub Mining Framework for Empirical Software Quality Studies.

**Abstract**:
This research focuses on designing an automated system to assess the quality of GitHub repositories based on a predefined quality model. The proposed approach evaluates repositories using a range of metrics, including commit history and its associated metadata (such as size, timestamps, and descriptions), code coverage, pull request review duration, issue resolution time, the number of open issues, code churn, and code complexity.
In addition, the system may incorporate further quality indicators, potentially drawing on established frameworks such as ISO/IEC 25010:2011, to provide a more comprehensive and standardized evaluation.

**Biogram**:
A software engineer with a background in the telecom domain, specializing in network management systems and Software-Defined Networking (SDN). Experienced in performance engineering, including bytecode instrumentation, and has worked across various technologies, including IoT. Previously contributed to major companies such as Ericsson, Cisco Systems, and Playtech.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-26 ====
<WRAP column 15%>
{{ :aira:mateusz-bulat-foto.JPG?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Mateusz Bułat, PhD Candidate @ Jagiellonian University

**Title**: Formal Grammar Transducers in medical image analysis and segmentation correction

**Abstract**:
Syntactic Pattern Recognition (SPR) is a data analysis approach stemming from formal grammars, formal languages, and the development of syntax analysers. It is particularly effective in analysing structures, both those found in the natural world and those in human-made artifacts. To date, many studies have applied SPR methods to medical diagnosis, such as detecting hearing impairments in neonates, as well as to commercial problems such as electricity consumption forecasting.
The candidate's current research focuses on medical image analysis for patients with oligodendroglioma brain cancer. The goal of the endeavour is to support clinicians in detecting and contouring cancerous changes in the brain. The glioma images acquired via MRI are 2D scans; the segmentation would therefore benefit from a method that corrects it to represent a coherent 3D structure.

**Biogram**:
After briefly supporting eCRF (electronic Case Report Form) systems for various medical trials, Mateusz is continuing his education as a first-year PhD student in Technical Computer Science at Jagiellonian University, where he also earned his Bachelor's and Master's degrees in Computer Science.
He has worked professionally on 3D medical image presentation, medical protocol analysis, clinical trial information flow, and Adverse Event report forms. His current interests include medical image analysis, formal grammar application methods, and hybrid AI systems.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-19 ====
<WRAP column 15%>
{{ :aira:mtm-new-foto.jpeg?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University

**Title**: Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals

**Abstract**:
While complex machine learning models excel in time series classification, standard explanation methods often produce raw numeric feature attributions that fail to provide actionable insights for domain experts. To address this, I introduce PHAR (Post-hoc Attribution Rules), a method that transforms numeric attributions into structured, human-readable rules. By employing rule fusion, this approach localizes decision-relevant segments and expresses them as clear logical conditions directly on the raw series. To scale this logic from individual predictions to global model understanding, I present the Open-Box framework. Acting as an explainer-agnostic global surrogate, it aggregates local rules and resolves overlaps, allowing us to successfully extract tacit domain knowledge that experts might otherwise miss. Complementing these symbolic approaches, I also discuss a prototype-driven framework for generating sparse counterfactual explanations, which highlights the minimal, physiologically plausible signal modifications required to alter a prediction. By visually aligning logical conditions and prototypical examples with temporal patterns, this integrated ecosystem elevates time series explanations from simple scores to actionable knowledge, effectively bridging the gap between complex models and expert decision support.

**Biogram**:
Maciej Mozolewski is a PhD Researcher at Jagiellonian University and a member of the GEIST research group led by Prof. Grzegorz J. Nalepa. His work centers on human-centered explainable AI (XAI) for dynamic data, including multivariate time series and explanation visualization. He focuses on post-hoc methods that bridge the gap between complex machine learning models and human-intelligible explanations, specifically through symbolic rules and counterfactuals. With nearly a decade of experience as a software engineer and data scientist, he combines his technical background with an M.A. in Psychology to ground his academic research in real-world human decision-making.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-12 ====