Software is the driving force behind many innovations in all aspects of human life, and its decisions therefore have a profound effect on society. It is thus highly important that software and its decisions can be explained, both to understand the reasons behind those decisions and to discuss and, where necessary, change the software. The EXPLAIN workshop addresses this highly relevant topic of explainable software. It seeks to bring together researchers from the various areas of software engineering and to provide a forum for exchanging challenges, research directions, and ideas on explainable software.
Please see https://explainws.github.io/ for more details about the workshop.
Fri 15 Nov

| Time | Session |
|---|---|
| 09:00 - 09:15 | |
| 09:15 - 10:30 | Causality and Fairness in Software (Yuriy Brun, University of Massachusetts Amherst) |
| 11:00 - 11:30 | Explaining Static Analysis - A Perspective |
| 11:30 - 12:00 | A Hybrid Editor for Fast Robot Mission Prototyping |
| 12:00 - 12:30 | Explaining Business Process Software with Fulib-Scenarios |
| 14:00 - 14:30 | Framework for Trustworthy Software Development |
| 14:30 - 15:00 | Don’t Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models |
| 15:00 - 15:30 | Working Group Formation |
Call for Papers
Specifically, the workshop seeks contributions related to, but not limited to, the following topics:
- explainable AI
- explaining algorithms and/or their decisions
- creating a chain of evidence
- quality assurance of learning and adaptive models and algorithms
- partial and live evaluation of source code
- explaining impact of algorithms on users and their behavior
- legal aspects of fully automatic decision making
- visualizations of algorithms and algorithm decisions
- tool support for joint decision making of humans and machines
This will be the first edition of the EXPLAIN workshop. The workshop will focus on (1) identifying problems, e.g., how to ensure understandable explanations and which aspects of a software’s decisions should be explained, (2) discussing ideas on how to provide explanations, and (3) building a community around explainable software.
We welcome four-page research papers, experience reports, and position papers. Research papers are expected to describe new research results and contribute to the body of knowledge in the area. Experience reports are expected to describe experiences with (amongst other things) providing, creating, and using explanations in the development, deployment, and maintenance of software. Position papers are expected to discuss controversial issues or describe interesting or thought-provoking ideas that are not yet fully developed.
All papers need to follow the general formatting guidelines and policies. Submissions not conforming to these will be desk-rejected.