
Details

Author(s) / Contributors
Title
Bidirectional Posture-Appearance Interaction Network for Driver Behavior Recognition
Is Part Of
  • IEEE transactions on intelligent transportation systems, 2022-08, Vol.23 (8), p.13242-13254
Place / Publisher
New York: IEEE
Year of Publication
2022
Source
IEEE/IET Electronic Library (IEL)
Descriptions / Notes
  • Driver behavior recognition has become one of the most important tasks for intelligent vehicles. The task is challenging, however, since the background content in real-world driving scenarios is often very complex. More critically, the differences between driving behaviors are often minor, making them extremely difficult to distinguish. Existing methods often rely only on RGB frames (or skeleton data), which may fail to capture both the minor differences between behaviors and the appearance information of objects, and thus fail to achieve promising performance. To address these issues, we propose a bidirectional posture-appearance interaction network (BPAI-Net), which simultaneously considers RGB frames and skeleton (i.e., posture) data for driver behavior recognition. Specifically, we propose a posture-guided convolutional neural network (PG-CNN) and an appearance-guided graph convolutional network (AG-GCN) to extract appearance and posture features, respectively. To exploit the complementary information between appearance and posture, we use the appearance features from PG-CNN to guide AG-GCN in exploiting contextual information (e.g., nearby objects) and enhancing the posture features. We then use the enhanced posture features from AG-GCN to help PG-CNN focus on the critical local areas of video frames that are related to driver behaviors. In this way, the interaction between the two modalities extracts more discriminative features and improves recognition accuracy. Experimental results on the Drive&Act dataset show that our method outperforms state-of-the-art methods by a large margin (67.83% vs. 63.64%). Furthermore, we collect a bus driver behavior recognition dataset and achieve consistent performance gains over baseline methods, demonstrating the effectiveness of our method in real-world applications. The source code and trained models are available at github.com/SCUT-AILab/BPAI-Net/.
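  • As a rough illustration of the bidirectional interaction described in the abstract, below is a minimal PyTorch sketch. It is not the authors' code: all module names, dimensions, and the specific gating scheme are illustrative assumptions; the official implementation is at github.com/SCUT-AILab/BPAI-Net/.

    # Minimal sketch (assumed, not from the paper) of bidirectional
    # posture-appearance feature interaction, assuming PyTorch.
    import torch
    import torch.nn as nn

    class BidirectionalInteraction(nn.Module):
        """Exchanges information between an appearance (CNN) feature map
        and a posture (GCN) feature vector via simple cross-modal gating."""

        def __init__(self, app_dim: int = 512, pose_dim: int = 256):
            super().__init__()
            # Appearance -> posture: project global appearance context so the
            # GCN branch can incorporate nearby-object information.
            self.app_to_pose = nn.Linear(app_dim, pose_dim)
            # Posture -> appearance: derive a channel gate that highlights
            # behavior-related regions in the frame features.
            self.pose_to_gate = nn.Linear(pose_dim, app_dim)

        def forward(self, app_feat: torch.Tensor, pose_feat: torch.Tensor):
            # app_feat:  (B, C_app, H, W) frame features from the CNN branch
            # pose_feat: (B, C_pose)      pooled joint features from the GCN branch
            b, c, h, w = app_feat.shape

            # Appearance-guided posture enhancement (AG-GCN direction):
            # add global appearance context to the posture embedding.
            app_context = app_feat.mean(dim=(2, 3))                  # (B, C_app)
            pose_enhanced = pose_feat + self.app_to_pose(app_context)

            # Posture-guided appearance attention (PG-CNN direction):
            # gate each channel with a posture-derived mask.
            gate = torch.sigmoid(self.pose_to_gate(pose_enhanced))  # (B, C_app)
            app_enhanced = app_feat * gate.view(b, c, 1, 1)

            return app_enhanced, pose_enhanced

    if __name__ == "__main__":
        # Toy usage with random tensors standing in for CNN/GCN outputs.
        block = BidirectionalInteraction()
        app = torch.randn(2, 512, 7, 7)
        pose = torch.randn(2, 256)
        a, p = block(app, pose)
        print(a.shape, p.shape)  # (2, 512, 7, 7) and (2, 256)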
Language
English
Identifiers
ISSN: 1524-9050
eISSN: 1558-0016
DOI: 10.1109/TITS.2021.3123127
Title ID: cdi_crossref_primary_10_1109_TITS_2021_3123127
