Yeche, Hugo, Justin Harrison, and Tess Berthier. In Interpretability of Machine Intelligence in Medical Image Computing and Multimodal Learning for Clinical Decision Support: Second International Workshop, iMIMIC 2019, and 9th International Workshop, ML-CDS 2019, Held in Conjunction with MICCAI 2019, Shenzhen, China, October 17, 2019, Proceedings (pp. 12-20).
Understanding the predictions of Deep Learning (DL) models is crucial for domain experts without DL expertise, who must justify the resulting decision-making process. Today, medical models are often built on hand-crafted features such as radiomics, yet the link between these features and neural network representations remains unclear. To address this lack of interpretability, approaches based on human-understandable concepts, such as TCAV, have been introduced. These methods have shown promising results, but they are unsuited to continuous-valued concepts, and the metrics they introduce do not scale well to high-dimensional spaces. To bridge the gap with radiomics-based models, we implement a regression concept vector that shows the impact of radiomic features on the predictions of deep networks. In addition, we introduce a new metric that scales better to high-dimensional spaces, allowing comparison across multiple layers.
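As a rough illustration of the idea behind regression concept vectors, the sketch below (a minimal NumPy toy, not the authors' implementation; function names and the synthetic data are assumptions) fits a least-squares regression of a continuous concept value, such as a radiomic feature, against a layer's activations, and takes the unit-normalised weight vector as the concept direction. Sensitivity to the concept can then be estimated as the directional derivative of the network output along that vector.

```python
import numpy as np

def regression_concept_vector(activations, concept_values):
    """Fit a continuous concept (e.g. a radiomic feature) against layer
    activations by least squares; the unit-normalised weight vector is
    the direction along which the concept increases fastest."""
    A = np.column_stack([activations, np.ones(len(activations))])  # add bias term
    w, *_ = np.linalg.lstsq(A, concept_values, rcond=None)
    v = w[:-1]                          # drop the bias coefficient
    return v / np.linalg.norm(v)

def concept_sensitivity(gradients, rcv):
    """Directional derivative of the model output along the concept
    vector: one score per sample (positive means the concept pushes
    the prediction up)."""
    return gradients @ rcv

# Toy check: plant a concept direction in synthetic activations
# and verify the regression recovers it.
rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 16))        # 200 samples, 16-dim layer
true_dir = np.zeros(16)
true_dir[3] = 1.0                        # concept lives on one axis
concept = acts @ true_dir + 0.01 * rng.normal(size=200)
rcv = regression_concept_vector(acts, concept)
print(abs(rcv @ true_dir))               # close to 1.0: direction recovered
```

The same recipe applies per layer: because the concept vector is always unit-normalised, sensitivity scores computed at different depths can be compared, which is the setting the proposed metric is designed for.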