===== Schedule Spring 2026 =====
| + | |||
| + | * **[RESEARCH TRACK] 2026.04.16**: | ||
| + | * Meeting link: | ||
| + | * Recording: [[https:// | ||
| + | * Presentation slides: {{: | ||
| * **[PHD TRACK] 2026.04.09**: | * **[PHD TRACK] 2026.04.09**: | ||
| + | |||
| + | ==== 2026-04-16 ==== | ||
| + | <WRAP column 15%> | ||
| + | {{ : | ||
| + | </ | ||
| + | |||
| + | <WRAP column 75%> | ||
| + | |||
| + | **Speaker**: | ||
| + | |||
| + | **Title**: Explaining Models or Modelling Explanations? | ||
| + | |||
**Abstract**:
Counterfactual explanations (CE) and algorithmic recourse (AR) have emerged as promising approaches to explaining opaque machine learning models and empowering the individuals affected by them. This seminar will explore unexpected challenges and new opportunities in this context and demonstrate how counterfactuals can be used to improve the trustworthiness of models. It will summarize some of the main findings of Patrick's
| + | |||
| + | **Biogram**: | ||
| + | Patrick is a trained economist, computer scientist, and researcher. In his research, he has challenged long-standing paradigms in explainable AI, developed novel methods to make AI more trustworthy, | ||
| + | </ | ||
| + | <WRAP clear></ | ||
| ==== 2026-04-09 ==== | ==== 2026-04-09 ==== | ||