ASE 2019
Sun 10 - Fri 15 November 2019 San Diego, California, United States
Fri 15 Nov 2019 14:30 - 15:00 at Cortez 1B - Explainability and ML Chair(s): Albert Zündorf

Explaining the reasoning and behaviour of artificially intelligent systems to human users is becoming increasingly urgent, especially in the field of machine learning (ML). Many recent contributions approach this issue with post-hoc methods, meaning they consider only the final system and its outcomes, while the origins of the artefacts it contains are widely neglected. However, since ML models are inherently opaque to the human eye, the specific design decisions and meta-information that accrue during development are highly relevant for explainability.

Two main aspects shape the position presented in this paper. First, there needs to be a stronger focus on the development of ML-based systems: to provide appropriate explanations of complex learning systems, process transparency should be encouraged. Second, we suggest applying methods from the field of software provenance to improve both transparency and explainability. We have already taken initial steps towards realising this proposal.

Fri 15 Nov

14:00 - 15:30: EXPLAIN 2019 - Explainability and ML at Cortez 1B
Chair(s): Albert Zündorf (Kassel University)
14:00 - 14:30
Framework for Trustworthy Software Development
Jagadeesh Chandra Bose R P (Accenture), Kapil Singi (Accenture), Vikrant Kaulgud (Accenture Labs, India), Kanchanjot Kaur Phokela (Accenture), Sanjay Podder (Accenture)
14:30 - 15:00
Don’t Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models
Sophie F. Jentzsch (DLR), Nico Hochgeschwender (German Aerospace Center)
15:00 - 15:30
Working Group Formation