
Details

Author(s) / Contributors
Title
Contrastive Learning based Modality-Invariant Feature Acquisition for Robust Multimodal Emotion Recognition with Missing Modalities
Is part of
  • IEEE transactions on affective computing, 2024-03, p.1-18
Place / Publisher
IEEE
Year of publication
2024
Link to full text
Source
IEEEXplore
Descriptions / Notes
  • Multimodal emotion recognition (MER) aims to understand how humans express their emotions by exploring complementary information across modalities. However, full-modality data cannot always be guaranteed in real-world scenarios. To deal with missing modalities, researchers have focused on learning meaningful joint multimodal representations through cross-modal missing-modality imagination. However, the cross-modal imagination mechanism is highly susceptible to errors because of the "modality gap", which degrades imagination accuracy and, in turn, final recognition performance. To this end, we introduce the concept of a modality-invariant feature into the missing-modality imagination network, which contains two key modules: 1) a novel contrastive-learning-based module that extracts modality-invariant features under full-modality conditions; 2) a robust imagination module that reconstructs missing information from imagined invariant features under missing-modality conditions. Finally, we combine the imagined and available modalities for emotion recognition. Experimental results on benchmark datasets demonstrate that the proposed method outperforms existing state-of-the-art strategies. Compared with our previous work, this extended version is more effective for multimodal emotion recognition with missing modalities. The code is released at https://github.com/ZhuoYulang/CIF-MMIN.
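The record does not give the paper's actual loss formulation; as a rough, hypothetical sketch of the contrastive idea described in the abstract (pulling features of the same sample from different modalities together while pushing mismatched pairs apart), an InfoNCE-style objective between two modality encoders could look like the following. All names and the temperature value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def info_nce_loss(feat_a, feat_b, temperature=0.07):
    """Illustrative InfoNCE-style contrastive loss (not the paper's exact loss).

    feat_a, feat_b: (batch, dim) features of the same samples from two
    modalities. Row i of feat_a and row i of feat_b form a positive pair;
    every other cross-modal pair in the batch acts as a negative.
    """
    # L2-normalize so the dot product equals cosine similarity
    a = feat_a / np.linalg.norm(feat_a, axis=1, keepdims=True)
    b = feat_b / np.linalg.norm(feat_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature  # (batch, batch) similarity matrix
    # cross-entropy with the diagonal (matching pairs) as the target class
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing such a loss encourages encoders for different modalities to map the same sample to nearby points, which is one common way to obtain modality-invariant features; correctly aligned batches yield a lower loss than shuffled (mismatched) ones.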
