
Details

Author(s) / Contributors
Title
Exploring modality-shared appearance features and modality-invariant relation features for cross-modality person Re-IDentification
Is part of
  • Pattern recognition, 2023-03, Vol.135, p.109145, Article 109145
Place / Publisher
Elsevier Ltd
Year of publication
2023
Link to full text
Source
Elsevier ScienceDirect Journals Complete
Descriptions/Notes
  • Highlights:
    • Using the modality-shared and modality-invariant features for cross-modality Re-ID.
    • Designing a new model to extract modality-shared and modality-invariant features.
    • Introducing a new loss to further decrease cross-modality variations.
  • Most existing cross-modality person Re-IDentification works rely on discriminative modality-shared features for reducing cross-modality variations and intra-modality variations. Despite their preliminary success, such modality-shared appearance features cannot capture enough modality-invariant discriminative information due to a massive discrepancy between RGB and IR images. To address this issue, on top of appearance features, we further capture the modality-invariant relations among different person parts (referred to as modality-invariant relation features), which help to identify persons with similar appearances but different body shapes. To this end, a Multi-level Two-streamed Modality-shared Feature Extraction (MTMFE) sub-network is designed, where the modality-shared appearance features and modality-invariant relation features are first extracted in a shared 2D feature space and a shared 3D feature space, respectively. The two features are then fused into the final modality-shared features such that both cross-modality variations and intra-modality variations can be reduced. Besides, a novel cross-modality center alignment loss is proposed to further reduce the cross-modality variations. Experimental results on several benchmark datasets demonstrate that our proposed method exceeds state-of-the-art algorithms by a wide margin. A hedged sketch of the center alignment idea follows below.
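The abstract does not give the formula for the proposed cross-modality center alignment loss, so the following is only a minimal sketch of one plausible reading: per-identity feature centers are computed separately for the RGB and IR modalities and pulled toward each other. The function name, tensor shapes, and the use of a Euclidean center distance are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a cross-modality center alignment loss (PyTorch).
# Assumption: the loss penalizes the distance between the RGB and IR feature
# centers of each identity that appears in both modalities.
import torch


def cross_modality_center_alignment_loss(rgb_feats: torch.Tensor,
                                          rgb_labels: torch.Tensor,
                                          ir_feats: torch.Tensor,
                                          ir_labels: torch.Tensor) -> torch.Tensor:
    """Mean Euclidean distance between per-identity RGB and IR feature centers.

    rgb_feats / ir_feats: (N, D) embeddings from the RGB and IR streams.
    rgb_labels / ir_labels: (N,) integer person identities.
    """
    # Identities present in both modalities within the batch.
    shared_ids = torch.unique(rgb_labels[torch.isin(rgb_labels, ir_labels)])
    if shared_ids.numel() == 0:
        return rgb_feats.new_zeros(())

    distances = []
    for pid in shared_ids:
        rgb_center = rgb_feats[rgb_labels == pid].mean(dim=0)
        ir_center = ir_feats[ir_labels == pid].mean(dim=0)
        # Distance between the two modality centers of one identity.
        distances.append((rgb_center - ir_center).norm(p=2))
    return torch.stack(distances).mean()
```

In a training loop this term would typically be added, with some weighting factor, to the usual identification and triplet losses; the weighting and the exact distance metric are not specified in the record and are left as assumptions here.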
Language
English
Identifiers
ISSN: 0031-3203
eISSN: 1873-5142
DOI: 10.1016/j.patcog.2022.109145
Title ID: cdi_crossref_primary_10_1016_j_patcog_2022_109145
