====== The EXplainable AI in Law (XAILA) 2019 Workshop ======

**XAILA webpage [[http://xaila.geist.re]]**

**The second edition of XAILA** will be held at the [[https://icail2019-cyberjustice.com|International Conference on Artificial Intelligence and Law (ICAIL)]], June 17-21, 2019, Montréal (Qc.), Canada.

**Organized by:** Grzegorz J. Nalepa, Martin Atzmueller, Michał Araszkiewicz, Paulo Novais

====== The EXplainable AI in Law (XAILA) 2018 Workshop ======
  
  * Jakub Harašta. //Trust by Discrimination: Technology Specific Regulation & Explainable AI//
  * Giovanni Sileno, Alexander Boer and Tom Van Engers. //The Role of Normware in Trustworthy and Explainable AI//
  * Martijn Van Otterlo and Martin Atzmueller. //On Requirements and Design Criteria for Explainability in Legal AI//
  * Michał Araszkiewicz and Grzegorz J. Nalepa. //Explainability of Formal Models of Argumentation Applied to Legal Domain//
  * Bernardo Alkmim, Edward Hermann Haeusler and Alexandre Rademaker. //Utilizing iALC to Formalize the Brazilian OAB Exam//
  
===== Workshop Schedule =====
9.45-10.10 <del>9.30-9.40</del> - **Introduction** (conference chairs)\\
<del>9.40-10.10 - Jakub Harašta. Trust by Discrimination: Technology Specific Regulation & Explainable AI</del>\\
10.10-10.40 - Giovanni Sileno, Alexander Boer and Tom Van Engers. The Role of Normware in Trustworthy and Explainable AI\\
10.40-11.00 - Michał Araszkiewicz and Tomasz Zurek. A Dialogical Framework for Disputed Issues in Legal Interpretation\\
  
11.30-12.30 - **Keynote lecture: [[http://www.ai.rug.nl/~verheij|Bart Verheij]]: Good AI and Law**\\
Abstract: //AI's successes are these days so prominent that---if we believe reports in the news---the times seem near that machines perform better at any human task than humans themselves. At the same time the prominent AI technique of neural networks---today typically called deep learning---is often considered to lead to black box results, hindering transparency, explainability and responsibility, values that are central in the domain of law. So in that specific sense, the distance between neural network AI and the needs of the law is vast. In this talk, it is claimed that for good AI & Law we need an AI that can provide good answers to our questions, has good reasons for them and makes good choices. It is argued that the path towards good AI & Law requires the integration of data-driven and knowledge-based AI, and that argumentation as it occurs in the law can show the way to such integration.//

Bio: //Prof. Bart Verheij holds the chair of artificial intelligence and argumentation at the University of Groningen. He is head of the department of Artificial Intelligence in the Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, Faculty of Science and Engineering. He participates in the Multi-Agent Systems research program. His research focuses on artificial intelligence and argumentation, often with the law as application domain. He is currently working on the connections between knowledge, data and reasoning, as a contribution to explainable, responsible and social artificial intelligence. He is president of the International Association for Artificial Intelligence and Law (IAAIL).//
    
12.30-13.00 - Michał Araszkiewicz and Grzegorz J. Nalepa. Explainability of Formal Models of Argumentation Applied to Legal Domain\\
xaila/start.txt · Last modified: 2021/11/27 17:39 by gjn