
Details

Author(s) / Contributors
Titel
Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers
Is part of
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021, p.387-396
Place / Publisher
IEEE
Year of publication
2021
Source
IEEE/IET Electronic Library (IEL)
Descriptions / Notes
  • Transformers are increasingly dominating multi-modal reasoning tasks, such as visual question answering, achieving state-of-the-art results thanks to their ability to contextualize information using the self-attention and co-attention mechanisms. These attention modules also play a role in other computer vision tasks, including object detection and image segmentation. Unlike Transformers that rely only on self-attention, Transformers with co-attention must consider multiple attention maps in parallel in order to highlight the information in the model's input that is relevant to the prediction. In this work, we propose the first method to explain the predictions of any Transformer-based architecture, including bi-modal Transformers and Transformers with co-attention. We provide generic solutions and apply them to the three most commonly used architectures: (i) pure self-attention, (ii) self-attention combined with co-attention, and (iii) encoder-decoder attention. We show that our method is superior to all existing methods, which are adapted from single-modality explainability. Our code is available at: https://github.com/hila-chefer/Transformer-MM-Explainability.
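The abstract describes tracing a prediction back to input tokens by aggregating per-layer attention maps. As an illustration only (a minimal attention-rollout-style sketch, not the authors' exact relevancy-propagation rules, which also incorporate gradients and co-attention updates), the following NumPy code multiplies row-normalized self-attention maps across layers while accounting for residual connections:

```python
import numpy as np

def attention_rollout(attn_maps):
    """Aggregate per-layer self-attention maps into one token-to-token
    relevancy matrix (illustrative rollout, not the paper's full method).

    attn_maps: list of (num_tokens, num_tokens) row-stochastic
               self-attention matrices, one per layer, input to output order.
    """
    num_tokens = attn_maps[0].shape[0]
    rollout = np.eye(num_tokens)
    for attn in attn_maps:
        # Mix in the identity to model the residual (skip) connection,
        # then re-normalize so each row remains a probability distribution.
        attn_res = 0.5 * attn + 0.5 * np.eye(num_tokens)
        attn_res = attn_res / attn_res.sum(axis=-1, keepdims=True)
        # Compose this layer's attention with the accumulated rollout.
        rollout = attn_res @ rollout
    return rollout
```

Because each row-normalized layer matrix is row-stochastic, the aggregated rollout stays row-stochastic, so each row can be read as a distribution over input tokens.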
Language
English
Identifiers
eISSN: 2380-7504
DOI: 10.1109/ICCV48922.2021.00045
Title ID: cdi_ieee_primary_9710570
