Details

Author(s) / Contributors
Title
Concept attribution: Explaining CNN decisions to physicians
Is Part of
  • Computers in biology and medicine, 2020-08, Vol.123, p.103865-103865, Article 103865
Place / Publisher
United States: Elsevier Ltd
Year of Publication
2020
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Deep learning explainability is often achieved by gradient-based approaches that attribute the network output to perturbations of the input pixels. However, input pixels can be difficult to relate to relevant image features in some applications, e.g. diagnostic measures in medical imaging. The framework described in this paper shifts the attribution focus from pixel values to user-defined concepts. By checking whether certain diagnostic measures are present in the learned representations, experts can explain and build trust in the network output. Being post-hoc, our method does not alter the network training and can easily be plugged into the latest state-of-the-art convolutional networks. This paper presents the main components of the framework for attribution to concepts, along with a spatial pooling operation on top of the feature maps that yields a solid interpretability analysis. Furthermore, regularized regression is analyzed as a solution to regression overfitting in high-dimensional latent spaces. The versatility of the proposed approach is shown by experiments on two medical applications, namely histopathology and retinopathy, and on one non-medical task, handwritten digit classification. The obtained explanations are in line with clinicians' guidelines and complementary to widely used visualization tools such as saliency maps.
  • Highlights:
      • Feature attribution explains CNNs in terms of the input pixels.
      • Abstracting feature attribution to higher-level impacting factors is hard.
      • Concept attribution explains CNNs with high-level concepts such as clinical factors.
      • Nuclei pleomorphism is shown to be a relevant factor in breast tumor classification.
      • Concept attribution can match clinical expectations to the interpretability of CNNs.
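  • The notes describe a post-hoc pipeline: capture feature maps at an intermediate layer, spatially pool them, fit a regularized regression from the pooled activations to a continuous concept measure, and probe the network output along the learned concept direction. The sketch below illustrates that general regression-concept-vector idea, not the authors' reference implementation: the ResNet-18 backbone, the choice of layer3, global average pooling as the spatial pooling step, and the ridge penalty of 10.0 are all assumptions, and the probe images and concept scores are random stand-ins rather than real data.

```python
# Minimal sketch of post-hoc concept attribution (regression concept
# vectors): regress pooled activations against a concept measure, then
# probe the class score along the learned concept direction.
# Assumptions (not from the paper): ResNet-18, layer3, average pooling,
# alpha=10.0, and randomly generated stand-in probe data.
import numpy as np
import torch
import torchvision.models as models
from sklearn.linear_model import Ridge

model = models.resnet18(weights=None).eval()

feats = {}
def hook(_module, _input, output):
    feats["maps"] = output  # feature maps of shape (B, C, H, W)

handle = model.layer3.register_forward_hook(hook)

def pooled_activations(x):
    """Spatially average-pool the hooked feature maps into (B, C) vectors."""
    with torch.no_grad():
        model(x)
    return feats["maps"].mean(dim=(2, 3)).numpy()

# Hypothetical probe set: images paired with a continuous concept
# measurement (e.g. an expert-assigned nuclei pleomorphism score).
probe_images = torch.randn(64, 3, 224, 224)   # stand-in data
concept_scores = np.random.rand(64)           # stand-in measurements

# Regularized (ridge) regression from latent vectors to the concept
# measure; the penalty counters overfitting in the high-dimensional space.
acts = pooled_activations(probe_images)
reg = Ridge(alpha=10.0).fit(acts, concept_scores)
cv = reg.coef_ / np.linalg.norm(reg.coef_)    # unit-norm concept vector

def concept_sensitivity(x, class_idx):
    """Directional derivative of a class score along the concept vector."""
    out = model(x)                            # hook refreshes feats["maps"]
    maps = feats["maps"]
    grad = torch.autograd.grad(out[:, class_idx].sum(), maps)[0]
    pooled_grad = grad.mean(dim=(2, 3)).numpy()
    return pooled_grad @ cv                   # one score per input image

scores = concept_sensitivity(torch.randn(4, 3, 224, 224), class_idx=0)
handle.remove()
```

    Because the approach only reads activations through a forward hook, the trained network is left untouched, which is what makes the attribution post-hoc; the ridge penalty is the regularization step the notes mention for high-dimensional latent spaces.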
Language
English
Identifiers
ISSN: 0010-4825
eISSN: 1879-0534
DOI: 10.1016/j.compbiomed.2020.103865
Title ID: cdi_proquest_miscellaneous_2423798948
