====== The EXplainable AI in Law (XAILA) Workshop ======

**XAILA webpage [[http://xaila.geist.re]]**

**The second edition of XAILA** will be held at the [[https://icail2019-cyberjustice.com|International Conference on Artificial Intelligence and Law (ICAIL)]], June 17-21, 2019, Montréal (Qc.), Canada.

**Organized by:** Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais

[[start2018|The first edition, XAILA2018]], was organized by Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais\\
at the [[http://jurix2018.ai.rug.nl/|31st international conference on Legal Knowledge and Information Systems]], December 12–14, 2018 in Groningen, The Netherlands.\\
[[start2018|See the dedicated page for XAILA2018]]
  
===== XAILA2019@ICAIL =====
  
The 2nd EXplainable AI in Law Workshop (XAILA2019@ICAIL)\\
at the [[https://icail2019-cyberjustice.com|17th International Conference on Artificial Intelligence and Law (ICAIL2019)]]\\
June 17-21, 2019, Montréal (Qc.), Canada

Organizers: Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais

==== Workshop description ====

Humanized AI (HAI) includes important perspectives in AI systems, including transparency and explainability (XAI). The idea of XAI has recently emerged as one of the most debated topics, not only in the scientific community but also among the general public. The design and use of AI algorithms raise important engineering, societal, ethical and legal challenges. In particular, AI-enhanced tools are used in commercial settings (advertisement, e-marketing), civil and labour law relations (such as employee assessment and recruitment processes), financial markets, penitentiary systems, as well as in medical diagnosis. Decisions taken with the support of, or directly based on, the results generated by AI have an ever greater impact on the lives of societies and individuals. Machine learning tools are also intensively developed with the intention of applying them in the provision of legal services and in legal decision-making processes. The understandability of the operation of these algorithms, as well as the provision of explanations of the decision-making process in AI systems, is of profound importance. Furthermore, only these features can lay the foundations for a proper discussion of the ethical aspects of AI systems. The aim of the workshop is to discuss the current state of the art with respect to these broad yet important multidisciplinary challenges, as well as the prospects for the future.

==== Topics ====

The scope of the XAILA workshop encompasses a broad array of topics, including but not limited to:
  * the notions of transparency, interpretability and explainability in XAI
  * non-functional design choices for explainable and transparent AI systems (including legal requirements)
  * legal consequences of black-box AI systems
  * legal criteria for explainable and transparent AI systems
  * possible applications of XAI systems in the area of legal policy deliberation, legal practice, teaching and research
  * ethical and legal implications of the use of AI systems in different spheres of societal life
  * the notion of a right to explanation
  * relation of XAI and argumentation technologies
  * XAI models and architectures
  * risk-based approaches to the analysis of AI systems and the influence of XAI on risk assessment
  * incorporating ethical values into AI systems, and the legal interpretation and consequences of this process
  * XAI, privacy and data protection
  * possible legal aspects and consequences of affective systems
  * XAI, certification and compliance
  
==== The intended audience ====
The workshop is of particular interest to members of the AI and Law community. However, it may also be relevant to sociologists, lawyers (e.g. judges), data protection officers, business people, policymakers, legislators, public officers, NGOs and, last but certainly not least, engineers. Our objective is to bring together people from AI interested in XAI/HAI topics and to create an ample space for discussion with people from the fields of legal scholarship and/or legal practice.

==== List of members of the program committee ====
//tentative//
  
Martin Atzmueller, Tilburg University, The Netherlands\\
Martijn von Otterlo, Tilburg University, The Netherlands\\
Adrian Paschke, Freie Universität Berlin, Germany\\
Jose Palma, Universidad de Murcia, Spain\\
Monica Palmirani, Università di Bologna, Italy\\
Radim Polčák, Masaryk University, Czech Republic\\
Marie Postma, Tilburg University, The Netherlands\\
Juan Pavón, Universidad Complutense de Madrid, Spain\\
Ken Satoh, National Institute of Informatics, Japan\\
Erich Schweighofer, University of Vienna, Austria\\
Piotr Skrzypczyński, Poznań University of Technology, Poland\\
Dominik Ślęzak, Warsaw University, Poland\\
Michal Valco, University of Presov, Slovakia\\
Tomasz Żurek, Maria Curie-Skłodowska University of Lublin, Poland
  
==== Important dates ====
  
Submission: 26.04.2019\\
Notification: 10.05.2019\\
Camera-ready: 31.05.2019\\
Workshop: 17.06.2019
  
==== Submission and proceedings ====
Please submit using the dedicated EasyChair installation:
[[https://easychair.org/conferences/?conf=xaila2019icail]]

We accept long (8 pages) and short/position (4 pages) papers in PDF only.
Please use the ACM format: [[https://www.acm.org/publications/proceedings-template]]
  
Workshop proceedings will be made available by CEUR-WS.
A post-workshop journal publication is being considered.
  