Don’t Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models
Explaining the reasoning and behaviour of artificially intelligent systems to human users is becoming increasingly urgent, especially in the field of machine learning (ML). Many recent contributions approach this issue with post-hoc methods, meaning they consider only the final system and its outcomes, while the roots of the included artefacts are widely neglected. However, as ML models are inherently opaque to the human eye, the specific design decisions and meta-information that accrue during development are highly relevant for explainability.
Two main aspects shape the position presented in this paper. First, there needs to be a stronger focus on the development of ML-based systems: to provide appropriate explanations of complex learning systems, process transparency should be encouraged. Second, we suggest applying methods from the field of software provenance to improve both transparency and explainability. We have already taken initial steps towards realising this proposal.
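To make the idea of provenance-supported development concrete, the following is a minimal, purely illustrative sketch of how development-time provenance could be logged: each step appends an entry recording an artefact's content hash, the parameters that produced it, and a timestamp. The function name, log schema, and file format here are assumptions for illustration, not the method described in the paper.

```python
# Illustrative sketch only: logging provenance metadata during ML development.
# The function name, schema, and JSON-lines format are assumptions, not the
# paper's actual approach.
import hashlib
import json
import time

def record_provenance(log_path, artefact_name, artefact_bytes, params):
    """Append one provenance entry: which artefact was produced, from what
    parameters, and when."""
    entry = {
        "artefact": artefact_name,
        "sha256": hashlib.sha256(artefact_bytes).hexdigest(),
        "params": params,          # e.g. hyperparameters or preprocessing choices
        "timestamp": time.time(),  # when this development step happened
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Usage: log a training-data snapshot together with the chosen hyperparameters.
entry = record_provenance(
    "provenance.jsonl",
    "training_data_v1",
    b"feature,label\n0.1,1\n",
    {"learning_rate": 0.01, "epochs": 10},
)
```

A log like this lets a later explanation refer back to the exact inputs and design decisions behind a model, rather than treating the trained artefact as an opaque endpoint.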
Fri 15 Nov
| 14:00 - 14:30 | Framework for Trustworthy Software Development |
| 14:30 - 15:00 | Don’t Forget Your Roots! Using Provenance Data for Transparent and Explainable Development of Machine Learning Models |
| 15:00 - 15:30 | Working Group Formation |