aira:start, last modified 2026/03/19 16:26 by mtm
Contact for enrollment of the JU PhD students [[https://fais.uj.edu.pl/wydzial/dziekanat|Mrs Ewa Lelek @ WFAIS]]
  

===== Schedule Spring 2026 =====

  * **[PHD TRACK] 2026.03.19**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University, [[#section20260319|Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals]]
    * Meeting link: [[https://teams.microsoft.com/meet/33976511999142?p=J72QcmpdpxsUSETlIe|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQClgAkSalrIQY9WZA3zoYEGAYpG_AE0hbLjYOMDK1lQ-74?e=k2FNg9|View]]
    * Presentation slides: {{:aira:slides-maciej-mozolewski-2026-03-19.pdf|Download}}

  * **[RESEARCH TRACK] 2026.03.12**: Aleksandra Nabożny, Assistant Professor @ Akademia Leona Koźmińskiego, [[#section20260312|What is and how to classify credibility of online health information?]]
    * Meeting link: [[https://teams.microsoft.com/meet/3692206790088?p=w5o4kDyNtT0vCV2INH|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQBrCjQaK0qpQqfuxU71nfp2Aa3dTo5YvAzqr6LZjtCknrg?e=8fEfFn|View]]
    * Presentation slides: {{:aira:slides-aleksandra-nabozny-2026-03-12.pdf|Download}}

  * **[RESEARCH TRACK] 2026.03.05**: Jaromir Savelka, Research Associate @ Carnegie Mellon University, [[#section20260305|Large Language Models and Empirical Legal Studies]]
    * Meeting link: [[https://teams.microsoft.com/meet/3589150835363?p=pX2Mdw519jnSEB0ANJ|MS Teams]]
    * Recording: [[https://ujchmura.sharepoint.com/:v:/t/Section_495645_1/IQCxc5x4pRtBSq7RCv8hEcJnAWjxIshRd8MDn2DuXI5FFlQ?e=l76jo0|View]]
    * Presentation slides: {{:aira:slides-jaromir-savelka-2026-03-05.pdf|Download}}
===== Schedule Autumn 2025 =====
  
  
==== 2026-03-19 ====
<WRAP column 15%>
{{ :aira:mtm-new-foto.jpeg?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Maciej Mozolewski, PhD Candidate @ Jagiellonian University

**Title**: Beyond Heatmaps: Explaining Time Series with Post-hoc Attribution Rules and Counterfactuals

**Abstract**:
While complex machine learning models excel in time series classification, standard explanation methods often produce raw numeric feature attributions that fail to provide actionable insights for domain experts. To address this, I introduce PHAR (Post-hoc Attribution Rules), a method that transforms numeric attributions into structured, human-readable rules. By employing rule fusion, this approach localizes decision-relevant segments and expresses them as clear logical conditions directly on the raw series. To scale this logic from individual predictions to global model understanding, I present the Open-Box framework. Acting as an explainer-agnostic global surrogate, it aggregates local rules and resolves overlaps, allowing us to extract tacit domain knowledge that experts might otherwise miss. Complementing these symbolic approaches, I also discuss a prototype-driven framework for generating sparse counterfactual explanations, which highlights the minimal, physiologically plausible signal modifications required to alter a prediction. By visually aligning logical conditions and prototypical examples with temporal patterns, this integrated ecosystem elevates time series explanations from simple scores to actionable knowledge, effectively bridging the gap between complex models and expert decision support.

**Biogram**:
Maciej Mozolewski is a PhD Researcher at Jagiellonian University and a member of the GEIST research group led by Prof. Grzegorz J. Nalepa. His work centers on human-centered explainable AI (XAI) for dynamic data, including multivariate time series and explanation visualization. He focuses on post-hoc methods that bridge the gap between complex machine learning models and human-intelligible explanations, specifically through symbolic rules and counterfactuals. With nearly a decade of experience as a software engineer and data scientist, he combines his technical background with an M.A. in Psychology to ground his academic research in real-world human decision-making.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-12 ====
<WRAP column 15%>
{{ :aira:aleksandra-nabozny-foto.jpeg?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Aleksandra Nabożny, Assistant Professor @ Akademia Leona Koźmińskiego

**Title**: What is and how to classify credibility of online health information?

**Abstract**:
Misinformation in online health content poses a significant threat to public health. Despite years of effort, both the medical and Internet research communities continue to struggle to develop reliable methods for its identification and classification. Manual assessment by domain experts is accurate but prohibitively expensive and difficult to scale. At the same time, many automated approaches rely on overly simplistic assumptions. For example, the vast majority of computational studies use binary TRUE/FALSE labels and employ unstandardized annotation protocols, making experimental results difficult to reproduce.
In this seminar, I will present key challenges in the detection and classification of medical misinformation, illustrated with concrete examples and data. I will also propose practical countermeasures, including an annotation protocol focused on short text fragments that enables distributed and scalable data annotation while improving consistency and reproducibility.

**Biogram**:
Aleksandra Nabożny, PhD, specializes in the detection and analysis of medical disinformation, combining artificial intelligence methods with medical expertise. Since 2017, she has conducted research on the classification of health-related content in digital environments. Her work is carried out within an interdisciplinary research team that brings together physicians, sociologists, and computer scientists.
</WRAP>
<WRAP clear></WRAP>

==== 2026-03-05 ====
<WRAP column 15%>
{{ :aira:jaromír-savelka-foto.jpg?200| }}
</WRAP>

<WRAP column 75%>

**Speaker**: Jaromir Savelka, Research Associate @ Carnegie Mellon University

**Title**: Large Language Models and Empirical Legal Studies

**Abstract**:
The lecture examines the potential of large language models (LLMs) for empirical legal studies. Traditional empirical legal research has relied on labor-intensive annotation of legal texts to identify, e.g., legally relevant factors, thematic patterns, or other semantic categories. Recent experiments with LLMs demonstrate remarkable capabilities in zero- and few-shot semantic annotation of legal texts, at levels approaching those of trained lawyers. However, significant challenges remain: model brittleness to prompt formatting, the need for subject-matter expert supervision, difficulties distinguishing fine-grained legal categories, and methodological questions around the appropriate integration of AI tools into qualitative research workflows. The lecture explores how these emerging capabilities might reshape empirical legal scholarship, as well as the implications for legal education, research methodology, and the evolving relationship between human expertise and machine assistance in the study of law.

**Biogram**:
Jaromir Savelka is a research associate in the Computer Science Department at Carnegie Mellon University. He is interested in the intersection of natural language processing and society. Jaromir's work focuses on developing human-centered AI to improve the fairness, accessibility, and effectiveness of foundational systems like law and education. He builds and evaluates language technologies that empower legal professionals, expand access to justice, and create more adaptive and accessible learning environments.
</WRAP>
<WRAP clear></WRAP>
  
==== 2026-01-29 ====
aira/start.1769771091.txt.gz · Last modified: 2026/01/30 11:04 by mtm