====== The EXplainable AI in Law (XAILA) Workshop ======

**XAILA webpage [[http://

**Organized by:** Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz,

[[start2018|The first edition, XAILA2018]] was organized by Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, at the [[http://

[[start2018|See the dedicated page for XAILA2018]]
===== XAILA2019@ICAIL =====

The 2nd EXplainable AI in Law Workshop (XAILA2019@ICAIL)\\
at the [[https://icail2019-cyberjustice.com|17th International Conference on Artificial Intelligence and Law (ICAIL2019)]]\\
June 17-21, 2019, Montréal (Qc.), Canada
Organizers: Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais

==== Workshop overview ====

Humanized AI (HAI) includes important perspectives in AI systems, including transparency and explainability (XAI). These perspectives have an important ethical dimension that is most often analyzed by philosophers. However, in order to be fruitful for AI engineers, this analysis has to be properly focused. It is the intersection of Law and AI that makes this possible, as it provides a conceptual framework for ethical concepts and values in AI systems. A significant part of AI and Law research during the last two decades was devoted to the operationalization of legal thinking with values. These results may now be reconsidered in a broader context, concerning the development of HAI systems and their social impact. It is a timely issue for the AI and Law community.

==== Topics ====
The scope of the XAILA workshop encompasses a broad array of topics including, but not limited to:
  * the notions of transparency,
  * non-functional design choices for explainable and transparent AI systems (including legal requirements)
  * legal requirements for AI systems in specific domains
  * legal consequences of black-box AI systems
  * legal criteria for explainable and transparent AI systems
  * possible applications of XAI systems in the area of legal policy deliberation,
  * ethical and legal implications of the use of AI systems in different spheres of societal life
  * the notion of right to explanation
  * relation of XAI and argumentation technologies
  * XAI models and architectures
  * risk-based approach to analysis of AI systems and the influence of XAI on risk assessment
  * incorporating ethical values into AI systems and the legal interpretation and consequences of this process
  * XAI, privacy and data protection
  * possible legal aspects and consequences of affective systems
  * legal requirements and risks in AI applications
  * XAI, certification and compliance
==== The intended audience ====

The workshop is of particular interest for members of the AI and Law community. However, it may also be found relevant by sociologists,

==== List of members of the program committee ====

Martin Atzmueller, Tilburg University, The Netherlands\\
Michal Araszkiewicz, Jagiellonian University, Poland\\
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland
==== Important dates ====

Submission: 26.04.2019\\
Notification:\\
Camera-ready:\\
Workshop:
==== Submission ====

Please submit using the dedicated Easychair installation:
[[https://

We accept long (8 pages) and short (4 pages) papers in PDF.
Please use the ACM format:

==== Proceedings ====

Workshop proceedings will be made available by CEUR-WS.
A post-workshop journal publication is under consideration.
- | |||
- | ===== Call for papers ===== | ||
- | {{ : | ||
- | |||
===== Accepted papers (XAILA2018) =====

Regular papers:
  * Jakub Harašta. //Trust by Discrimination:
  * Giovanni Sileno, Alexander Boer and Tom Van Engers. //The Role of Normware in Trustworthy and Explainable AI//
  * Martijn Van Otterlo and Martin Atzmueller. //On Requirements and Design Criteria for Explainability in Legal AI//
  * Michał Araszkiewicz and Grzegorz J. Nalepa. //Explainability of Formal Models of Argumentation Applied to Legal Domain//
  * Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. //Utilizing iALC to Formalize the Brazilian OAB Exam//
  * Muhammad Mudassar Yamin and Basel Katt. //Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain//

Short papers:
  * Michał Araszkiewicz and Tomasz Zurek. //A Dialogical Framework for Disputed Issues in Legal Interpretation//
  * Veronika Žolnerčíková. //Homologation of Autonomous Machines from a Legal Perspective//
===== Workshop schedule (XAILA2018) =====

9.45-10.10 -\\
10.10-10.40 - Giovanni Sileno, Alexander Boer and Tom Van Engers. The Role of Normware in Trustworthy and Explainable AI\\
10.40-11.00 - Michał Araszkiewicz and Tomasz Zurek. A Dialogical Framework for Disputed Issues in Legal Interpretation

11.00-11.30 - **Coffee break**

11.30-12.30 - **Keynote lecture: Bart Verheij, University of Groningen**

Abstract: //AI's successes are these days so prominent that---if we believe reports in the news---the times seem near that machines perform better at any human task than humans themselves. At the same time the prominent AI technique of neural networks---today typically called deep learning---is often considered to lead to black box results, hindering transparency,

Bio: //Prof. Bart Verheij holds the chair of artificial intelligence and argumentation at the University of Groningen. He is head of the department of Artificial Intelligence in the Bernoulli Institute of Mathematics,

12.30-13.00 - Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain\\
13.00-14.00 - **Lunch**

14.00-14.30 - Martijn Van Otterlo and Martin Atzmueller. On Requirements and Design Criteria for Explainability in Legal AI\\
14.30-15.00 - Muhammad Mudassar Yamin and Basel Katt. Ethical Problems and Legal Issues in Development and Usage Autonomous Adversaries in Cyber Domain

15.00-15.30 - **Coffee break**

15.30-16.00 - Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. Utilizing iALC to Formalize the Brazilian OAB Exam\\
16.00-16.20 - Veronika Žolnerčíková. Homologation of Autonomous Machines from a Legal Perspective\\
16.20-16.45 - **XAILA closing & open discussion**