Pitfalls to Avoid when Interpreting Machine Learning Models

Author(s)
Christoph Molnar, Gunnar König, Julia Herbinger, Timo Freiesleben, Susanne Dandl, Christian A. Scholbeck, Giuseppe Casalicchio, Moritz Grosse-Wentrup, Bernd Bischl
Abstract

Modern requirements for machine learning (ML) models include both high predictive performance and model interpretability. A growing number of techniques provide model interpretations, but these can lead to wrong conclusions if applied incorrectly. We illustrate pitfalls of ML model interpretation, such as bad model generalization, dependent features, feature interactions, and unjustified causal interpretations. Our paper addresses ML practitioners by raising awareness of pitfalls and pointing out solutions for correct model interpretation, and ML researchers by discussing open issues for further research.
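
As an illustration (a minimal sketch, not code from the paper or the portal record), the following Python snippet uses scikit-learn's permutation importance to show two of the pitfalls named above: computing importances on training data of a possibly overfit model (bad model generalization) and permuting one of two strongly dependent features. The synthetic data and all variable names are invented for this example.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.RandomState(0)
    n = 1000

    # x1 and x2 are strongly dependent; only x1 drives the target, x3 is noise.
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)
    x3 = rng.normal(size=n)
    X = np.column_stack([x1, x2, x3])
    y = x1 + 0.1 * rng.normal(size=n)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Pitfall "bad model generalization": importances computed on training
    # data describe what the (possibly overfit) model memorized; held-out
    # data is needed to support conclusions about the learned relationship.
    imp_train = permutation_importance(
        model, X_train, y_train, n_repeats=20, random_state=0
    )
    imp_test = permutation_importance(
        model, X_test, y_test, n_repeats=20, random_state=0
    )

    # Pitfall "dependent features": permuting x1 or x2 alone creates
    # unrealistic points (x1 far from x2), and the credit for the shared
    # signal is split between the two, so neither importance reflects
    # the full effect of that signal.
    print("train:", np.round(imp_train.importances_mean, 3))
    print("test: ", np.round(imp_test.importances_mean, 3))

Comparing the train and test importances, and noting how the credit for the shared signal splits between x1 and x2, makes both failure modes visible in the output.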

Organisation(s)
Research Group Neuroinformatics, Research Network Data Science, Vienna Cognitive Science Hub
External organisation(s)
Ludwig-Maximilians-Universität München, Universität Wien
No. of pages
10
DOI
https://doi.org/10.48550/arXiv.2007.04131
Publication date
07-2020
Peer reviewed
Yes
Austrian Fields of Science 2012
102019 Machine learning
Portal url
https://ucris.univie.ac.at/portal/en/publications/pitfalls-to-avoid-when-interpreting-machine-learning-models(a1564ed3-b0dc-4a0f-86fc-760b16d85d55).html