The EXplainable AI in Law (XAILA) 2018 Workshop

XAILA 2018 webpage http://xaila.geist.re

Organized by: Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais
at the 31st International Conference on Legal Knowledge and Information Systems (JURIX 2018), December 12–14, 2018, Groningen, The Netherlands

Abstract

Humanized AI (HAI) emphasizes transparency and explainability in AI systems. These perspectives have an important ethical dimension, one most often analyzed by philosophers. However, for this analysis to be fruitful for AI engineers, it has to be properly focused. It is the intersection of Law and AI that makes such a focus possible, as it provides a conceptual framework for ethical concepts and values in AI systems. A significant part of AI and Law research during the last two decades was devoted to the operationalization of legal thinking with values. These results may now be reconsidered in a broader context, concerning the development of HAI systems and their social impact. This makes it a timely issue for the AI and Law community.

Motivation and workshop topics

Humanized AI (HAI) encompasses important perspectives on AI systems, including transparency and explainability (XAI); another is the affective computing paradigm. These perspectives have an important ethical dimension. While the ethical discussion is conducted by many philosophers, for it to be fruitful for AI engineers it has to be properly focused with specific concepts and operationalized. We believe that it is the intersection of Law and AI that makes such an endeavor possible, as together they lay the foundations and provide a conceptual framework for ethical concepts and values in AI systems. Therefore, when discussing the ethical consequences and considerations of transparent and explainable AI systems, including affective systems, we should focus on the legal conceptual framework. A significant part of AI and Law research during the last two decades was devoted to the operationalization of legal thinking with values. These results may now be reconsidered in a broader context, concerning the development of XAI systems and their social impact. As such, it is a very timely issue for the AI and Law community. Our objective is to bring together people from AI interested in XAI/HAI topics (possibly with a broader background than just engineering) and to create ample space for discussion with people from the field of legal scholarship and/or legal practice. As many members of the AI and Law community combine both perspectives, the JURIX conference is a perfect venue for the workshop. Together we would like to address questions such as:

  • non-functional design choices for explainable and transparent AI systems (including legal requirements)
  • legal requirements for AI systems in specific domains
  • legal consequences of black-box AI systems
  • legal criteria for explainable and transparent AI systems
  • possible applications of XAI systems in the area of legal policy deliberation, legal practice, teaching and research
  • ethical and legal implications of the use of AI systems in different spheres of societal life
  • relation of XAI and argumentation technologies
  • XAI models and architectures
  • understanding of the notions of explanation and transparency in XAI
  • risk-based approach to analysis of AI systems and the influence of XAI on risk assessment
  • incorporating ethical values into AI systems and the legal interpretation and consequences of this process
  • XAI, privacy and data protection
  • possible legal aspects and consequences of affective systems
  • legal requirements and risks in AI applications
  • XAI, certification and compliance

Program committee

Martin Atzmueller, Tilburg University, The Netherlands
Michal Araszkiewicz, Jagiellonian University, Poland
Kevin Ashley, University of Pittsburgh, USA
Szymon Bobek, AGH University of Science and Technology, Poland
Jörg Cassens, University of Hildesheim, Germany
David Camacho, Universidad Autónoma de Madrid, Spain
Pompeu Casanovas, Universitat Autònoma de Barcelona, Spain
Colette Cuijpers, Tilburg University, The Netherlands
Rafał Michalczak, Jagiellonian University, Poland
Teresa Moreira, University of Minho, Braga, Portugal
Paulo Novais, University of Minho, Braga, Portugal
Grzegorz J. Nalepa, AGH University of Science and Technology and Jagiellonian University, Poland
Tiago Oliveira, National Institute of Informatics, Japan
Martijn van Otterlo, Tilburg University, The Netherlands
Adrian Paschke, Freie Universität Berlin, Germany
Jose Palma, Universidad de Murcia, Spain
Monica Palmirani, Università di Bologna, Italy
Radim Polčák, Masaryk University, Czech Republic
Marie Postma, Tilburg University, The Netherlands
Juan Pavón, Universidad Complutense de Madrid, Spain
Ken Satoh, National Institute of Informatics, Japan
Erich Schweighofer, University of Vienna, Austria
Piotr Skrzypczyński, Poznań University of Technology, Poland
Dominik Ślęzak, University of Warsaw, Poland
Michal Valco, University of Presov, Slovakia
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland

Important dates

  • Submission: 23.11.2018 (extended from 14.11.2018)
  • Notification: 30.11.2018 (extended from 23.11.2018)
  • Camera-ready: 07.12.2018 (extended from 30.11.2018)
  • Workshop: 12.12.2018

Submission

Please submit using the dedicated Easychair installation https://easychair.org/conferences/?conf=xaila2018

We accept long (8 pages) and short (4 pages) papers in PDF. Please use the IOS Press format.

Proceedings

Workshop proceedings will be made available via CEUR-WS. A post-workshop journal publication is being considered.

Accepted papers

Regular papers:

  • Jakub Harašta. Trust by Discrimination: Technology Specific Regulation & Explainable AI
  • Giovanni Sileno, Alexander Boer and Tom van Engers. The Role of Normware in Trustworthy and Explainable AI
  • Martijn van Otterlo and Martin Atzmueller. Two Tales of Explainability for Legal AI
  • Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain
  • Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. Utilizing iALC to Formalize the Brazilian OAB Exam
  • Muhammad Mudassar Yamin and Basel Katt. Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain

Short papers:

  • Michał Araszkiewicz and Tomasz Żurek. A Dialogical Framework for Disputed Issues in Legal Interpretation
  • Veronika Žolnerčíková. Homologation of Autonomous Machines from a Legal Perspective

Workshop Schedule

9.30-9.40 - Introduction (conference chairs)
9.40-10.10 - Jakub Harašta. Trust by Discrimination: Technology Specific Regulation & Explainable AI
10.10-10.40 - Giovanni Sileno, Alexander Boer and Tom van Engers. The Role of Normware in Trustworthy and Explainable AI
10.40-11.00 - Michał Araszkiewicz and Tomasz Żurek. A Dialogical Framework for Disputed Issues in Legal Interpretation

11.00-11.30 - Coffee break

11.30-12.30 - Keynote lecture: Bart Verheij: Good AI and Law
Bart Verheij holds the chair of Artificial Intelligence and Argumentation at the University of Groningen. He is head of the Department of Artificial Intelligence in the Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, Faculty of Science and Engineering, and participates in the Multi-Agent Systems research program. His research focuses on artificial intelligence and argumentation, often with the law as the application domain. He is currently working on the connections between knowledge, data and reasoning, as a contribution to explainable, responsible and social artificial intelligence. He is president of the International Association for Artificial Intelligence and Law (IAAIL).

12.30-13.00 - Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain
13.00-14.00 - Lunch

14.00-14.30 - Martijn van Otterlo and Martin Atzmueller. Two Tales of Explainability for Legal AI
14.30-15.00 - Muhammad Mudassar Yamin and Basel Katt. Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain

15.00-15.30 - Coffee break

15.30-16.00 - Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. Utilizing iALC to Formalize the Brazilian OAB Exam
16.00-16.20 - Veronika Žolnerčíková. Homologation of Autonomous Machines from a Legal Perspective
16.20-16.45 - XAILA closing & open discussion
