A Workshop for WebSci'2020
Explanations for AI: Computable or Not?
This workshop (hosted by the WebSci'20 conference) will focus on socially-sensitive decisions made or assisted by AI systems, which often rely on complex (e.g. machine learning) and opaque (so-called black-box) decision-making processes. The aim is to stimulate a lively debate on whether explanations for AI are computable by bringing together researchers, practitioners and representatives of AI (or AI-assisted) decision-making systems.
Automated decision making continues to be used for a variety of purposes within a multitude of sectors. Ultimately, what makes a 'good' explanation is a focus not only for the designers and developers of AI systems, but for many disciplines, including law, philosophy, psychology, history, sociology and human-computer interaction. The principal objective of this workshop is to build a cross-sectoral, multi-disciplinary and international network of people focusing on explanations for AI, and an agenda to drive this work forward.
A key goal is to uncover the key arguments for and against the computability of explanations for AI in decision-making that is likely to have a major impact on individuals (socially-sensitive decision-making). Given the growing complexity and opacity of underlying decision-making processes and the proliferation of automated decision-making systems, it is unsurprising that the notion of explainability is receiving close attention, particularly in light of the GDPR, which gave rise to the explainability debate. While explanations are of critical importance for all socially-sensitive decisions, regardless of whether they are reached through manual or automated processes, this workshop focuses specifically on those decisions made or assisted by AI systems.
We ask participants to consider whether explanations for AI are computable. For the purposes of this workshop, we define a 'computable explanation' as follows: explanation criteria derived from applicable legal and governance frameworks are translated into a set of rules that can be processed by explanation-generating algorithms. We will consider the following key issues:
- The extent to which the process that generates explanations for AI can and should be automated. For example: what are the key methodologies and the principal technical, legal and organizational challenges for generating computable explanations? How does the generation process itself remain accountable? Does it require meaningful human involvement?
- The principal benefits and limitations of computable explanations in comparison to non-computable explanations for AI, as well as to other methods for accountability.
Attendees are invited to submit short position papers of no more than three pages to the workshop organizers. Submitted position papers will be reviewed by the workshop organizers and accepted or rejected for inclusion in the workshop. Authors of accepted papers will have the opportunity to present their ideas in 10 minutes during the workshop. We hope to stimulate a lively debate on whether explanations for AI are computable by providing time for an interactive discussion after each paper.
The workshop will follow WebSci'20 advice with regard to hosting arrangements, which may include running the workshop with virtual presentations and group discussions. Updates will be posted as the situation develops.
List of topics
Topics of interest include, but are not limited to:
- Critiques and advantages of explanations for AI, including the extent to which explanations can or should be made computable.
- Use cases, scenarios and/or practical experience of explanations for AI, such as: the rationale, technologies and/or organisational measures used; and accounts from different perspectives – e.g. software designers, implementers and those subject to automated decision-making.
- Legal requirements for explanations, and the extent to which data ethics may drive explanations for AI.
- Reflections on the similarities and differences of explanations for AI decisions and manual decisions, as well as what makes a ‘good’ explanation and the etymology of explanations for socially-sensitive decisions.
- Lessons from other related areas, such as challenges faced in the areas of computable contracts and compliance automation.
All papers must be original. Short papers of at most three pages are invited; the page limit includes references. Authors should use the current ACM SIG Conference proceedings template (acmart.cls), which is available from the ACM guidelines. All contributions will be reviewed by the Organizing Committee against rigorous peer-review standards for quality and fit to the workshop. We will adopt a single-blind review process: do not anonymize your submissions. Submissions without authorship information will be desk-rejected without review.
Papers should be submitted via EasyChair at https://easychair.org/cfp/exAI2020
Accepted authors will also have the opportunity to showcase their work in the form of posters on July 8 at the joint conference and workshop poster reception. The outcomes from the workshop group discussions will be published on the webscience.org website as a blog post after the conference.
Deadline for submission of position papers: 25 April 2020
Notification of acceptance/rejection: 25 May 2020
Workshop: 7 July 2020 [tentative]
Workshop panel members
- Professor Sophie Stalla-Bourdillon, Interdisciplinary Centre for Law, Internet and Culture (iCLIC), University of Southampton, Southampton, UK
- Professor Luc Moreau, Department of Informatics, King’s College London, London, UK
- Dr. Laura Carmichael, Interdisciplinary Centre for Law, Internet and Culture (iCLIC), University of Southampton, Southampton, UK
- Niko Tsakalakis, Web and Internet Science (WAIS), University of Southampton, Southampton, UK
- Dong Huynh, Department of Informatics, King’s College London, London, UK
- Dr. Ayah Helal, Department of Informatics, King’s College London, London, UK
The workshop will be held at the University of Southampton, UK.
All questions about submissions should be emailed to Niko Tsakalakis at N.Tsakalakis@southampton.ac.uk.