====== The EXplainable AI in Law (XAILA) Workshop ======

**The main XAILA webpage is [[http://xaila.geist.re]]**
XAILA is an interdisciplinary workshop on the intersection of AI and Law, focusing on the important issues of explainability and responsibility of AI systems.

In 2020 we are having the 3rd edition of XAILA, organized by Grzegorz J. Nalepa, Michał Araszkiewicz, and colleagues (Jagiellonian University, Poland; University of Groningen, The Netherlands), at the JURIX 2020 conference.

JURIX 2020 is the 33rd International Conference on Legal Knowledge and Information Systems, organised by the Foundation for Legal Knowledge Based Systems (JURIX) since 1988. JURIX 2020 is co-hosted by the Institute of Law and Technology (Faculty of Law, Masaryk University, Brno) and the Knowledge-based Software Systems Group (Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague).

See more information in the [[start#xaila_2020_at_jurix2020|XAILA 2020 section]] below.
===== XAILA 2020 at JURIX2020 =====

==== Call for Papers ====
==== Motivation for the workshop ====

In the last several years we have observed a growing interest in explainable AI (XAI). This discussion has an important ethical dimension, which is most often analyzed by philosophers. However, in order for it to be fruitful for AI engineers, it has to be properly focused.

Recently, the term responsible AI (RAI) has been coined as a step beyond XAI. Discussion of RAI has again been strongly influenced by the “ethical” perspective. However, as practitioners in our fields we are convinced that these advancements should also be analyzed and supported from the legal perspective.
==== Topics of interest ====

Our objective is to bring together people from AI interested in XAI and RAI topics and experts from the field of law, and to provide an ample space for discussion. Topics of interest include:

* the notions of transparency, interpretability and explainability of AI systems
* non-functional design choices for explainable and transparent AI systems
* legal consequences of black-box AI systems
* legal criteria for explainable and transparent AI systems
* criteria of legal responsibility discussed in the context of intelligent systems operation and the role of explainability in liability ascription
* possible applications of XAI systems in the area of legal policy deliberation
* legal implications of the use of AI systems in different spheres of societal life
* the notion of right to explanation
* approaches to explainability of AI systems
* XAI, RAI and declarative domain knowledge
* risk-based approach to analysis of AI systems and the influence of XAI on risk assessment
* incorporation of ethical values into AI systems, its legal interpretation and consequences
* XAI, privacy and data protection
* XAI, certification and compliance
==== Workshop format ====

The workshop format includes paper presentations, invited talk(s), and a panel discussion.

The intended audience includes practitioners and theorists from both law and AI.
+ | |||
+ | ==== Program Committee ==== | ||
+ | |||
+ | List of members of the program committee (to be confirmed): | ||
+ | Martin Atzmueller, | ||
Michal Araszkiewicz, | Michal Araszkiewicz, | ||
Kevin Ashley, University of Pittsburgh, USA\\ | Kevin Ashley, University of Pittsburgh, USA\\ | ||
David Camacho, Universidad Autonoma de Madrid, Spain\\
Pompeu Casanovas, Universitat Autonoma de Barcelona, Spain\\
Teresa Moreira, University of Minho Braga, Portugal\\
Paulo Novais, University of Minho Braga, Portugal\\
Ken Satoh, National Institute of Informatics, Japan\\
Erich Schweighofer, University of Vienna, Austria\\
Michal Valco\\
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland
==== Important dates ====

Submission: 26.10.2020\\
Notification: \\
Camera-ready: \\
Workshop: \\
+ | |||
+ | ==== Submission details ==== | ||
+ | |||
+ | A dedicated Easychair installation is provided at [[https:// | ||
+ | |||
+ | Workshop proceedings will be made available by CEUR-WS. A post workshop journal publication is considered. | ||
+ | |||
+ | ===== Past editions of XAILA ===== | ||
+ | |||
+ | [[start2018|The first edition, XAILA2018]] was | ||
+ | Organized by: Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, | ||
+ | at the [[http:// | ||
+ | [[start2018|See the dedicated page for XAILA2018]] | ||
- | ===== Submission ===== | + | XAILA 2018 proceedings can be found at [[http://ceur-ws.org/Vol-2381]] |
We also proposed XAILA to be held at the [[https://icail2019-cyberjustice.com|International Conference on Artificial Intelligence and Law (ICAIL)]], June 17-21, 2019, Montréal (Qc.), Canada. While the workshop was met with large interest, this edition eventually did not take place.
[[icail2019|See the dedicated page for XAILA2019@ICAIL]]

[[start2019|The second edition of XAILA, XAILA2019]] was organized at the JURIX 2019 conference, December 11, 2019, Madrid, Spain, in the ETSI Minas y Energía School (Universidad Politécnica de Madrid).
[[start2019|See the dedicated page for XAILA2019]]

XAILA 2019 proceedings are also published at CEUR-WS.