====== The EXplainable AI in Law (XAILA) 2019 Workshop ======

**XAILA webpage [[http://

**The second edition of XAILA** will be held on the [[https://

**Organized by:** Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz,

====== The EXplainable AI in Law (XAILA) 2018 Workshop ======

**XAILA 2018 webpage [[http://

**Organized by:** Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz,
at the [[http://
Our objective is to bring together people from AI interested in XAI/HAI topics (possibly with a broader background than just engineering) and to create ample space for discussion with people from the field of legal scholarship and/or legal practice. As many members of the AI and Law community join both perspectives,
  * non-functional design choices for explainable and transparent AI systems (including legal requirements)
  * legal requirements for AI systems in specific
  * legal consequences of black-box AI systems
  * legal criteria for explainable and transparent AI systems
Martijn von Otterlo, Tilburg University, The Netherlands\\
Adrian Paschke, Freie Universität Berlin, Germany\\
Jose Palma, Universidad de Murcia, Spain\\
Monica Palmirani, Università di Bologna, Italy\\
Radim Polčák, Masaryk University, Czech Republic\\
Marie Postma, Tilburg University, The Netherlands\\
Juan Pavón, Universidad Complutense de Madrid, Spain\\
Ken Satoh, National Institute of Informatics, Japan\\
Erich Schweighofer, University of Vienna, Austria\\
Piotr Skrzypczyński, Poznań University of Technology, Poland\\
Dominik Ślęzak, Warsaw University, Poland\\
Michal Valco, University of Presov, Slovakia\\
===== Important dates =====
  * Submission: 23.11.2018
  * Notification:
  * Camera-ready:
  * Workshop:
Please submit using the dedicated EasyChair installation [[https://
We accept long (8 pages) and short (4 pages) papers.
Please use the [[http:// |IOS Press format.]]

===== Proceedings =====
===== Call for papers =====

===== Accepted papers =====

Regular papers:
  * Jakub Harašta. //Trust by Discrimination:
  * Giovanni Sileno, Alexander Boer and Tom Van Engers. //The Role of Normware in Trustworthy and Explainable AI//
  * Martijn Van Otterlo and Martin Atzmueller. //On Requirements and Design Criteria for Explainability in Legal AI//
  * Michał Araszkiewicz and Grzegorz J. Nalepa. //Explainability of Formal Models of Argumentation Applied to Legal Domain//
  * Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. //Utilizing iALC to Formalize the Brazilian OAB Exam//
  * Muhammad Mudassar Yamin and Basel Katt. //Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain//

Short papers:
  * Michał Araszkiewicz and Tomasz Zurek. //A Dialogical Framework for Disputed Issues in Legal Interpretation//
  * Veronika Žolnerčíková. //Homologation of Autonomous Machines from a Legal Perspective//

===== Workshop Schedule =====
9.45-10.10 <
10.10-10.40 - Giovanni Sileno, Alexander Boer and Tom Van Engers. The Role of Normware in Trustworthy and Explainable AI\\
10.40-11.00 - Michał Araszkiewicz and Tomasz Zurek. A Dialogical Framework for Disputed Issues in Legal Interpretation

11.00-11.30 - **Coffee break**

11.30-12.30 - **Keynote lecture: [[http:// |Bart Verheij]]**\\
Abstract: //AI's successes are these days so prominent that---if we believe reports in the news---the times seem near that machines perform better at any human task than humans themselves. At the same time the prominent AI technique of neural networks---today typically called deep learning---is often considered to lead to black box results, hindering transparency,

Bio: //Prof. Bart Verheij holds the chair of artificial intelligence and argumentation at the University of Groningen. He is head of the department of Artificial Intelligence in the Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence.//

12.30-13.00 - Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain\\
13.00-14.00 - **Lunch**

14.00-14.30 - Martijn Van Otterlo and Martin Atzmueller. On Requirements and Design Criteria for Explainability in Legal AI\\
14.30-15.00 - Muhammad Mudassar Yamin and Basel Katt. Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain

15.00-15.30 - **Coffee break**

15.30-16.00 - Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. Utilizing iALC to Formalize the Brazilian OAB Exam\\
16.00-16.20 - Veronika Žolnerčíková. Homologation of Autonomous Machines from a Legal Perspective\\
16.20-16.45 - **XAILA closing & open discussion**