Details

Author(s) / Contributors
Title
Daily Living Human Activity Recognition Using Deep Neural Networks
Is Part Of
  • 2023 International Workshop on Intelligent Systems (IWIS), 2023, pp. 1-6
Place / Publisher
IEEE
Year of Publication
2023
Link to Full Text
Source
IEEE Electronic Library (IEL)
Descriptions/Notes
  • In the provision of social health care and support for older individuals, the impact of action recognition is substantial. Human Activity Recognition (HAR) refers to the challenge of identifying a specific human motion or movement. Recognizing actions from video data involves identifying features in spatial position and defining the temporal variations across these extracted spatial parameters. An action recognition model accepts video data in which frames are sampled and fed to the model as an image stack. Although the sampled frames play an essential part, they are typically either selected at random or skipped sequentially. Consequently, recognizing actions in a video requires carefully selecting keyframes, which also keeps the computational cost low. Although various methods are available for recognizing activity from video data, only limited research addresses the issue of keyframe selection. To this end, we propose an approach for selecting keyframes from activity videos that assists in constructing an acceptable synopsis by extracting the most informative frames. Additionally, we present a deep learning approach combining a CNN and a GRU, which exploits both the learning of spatial information and the learning of temporal dependencies from instances of raw data. This representation is tested on the MSRDailyActivity3D dataset. Our proposed model outperforms several recent works using different data modalities in terms of accuracy. The mean accuracy of the proposed model is 94.22% for the RGB video modality. Our proposed CNN-GRU model achieves 94.87% precision, 94.25% recall, and 94.31% F1-score.
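
Note: The abstract outlines a two-stage pipeline, keyframe selection followed by a CNN-GRU classifier in which a CNN learns per-frame spatial features and a GRU models the temporal dependencies across the selected frames. The Python/PyTorch sketch below illustrates only this general structure: the difference-based keyframe heuristic, the ResNet-18 backbone, and the hidden size are illustrative assumptions, since the abstract does not specify the paper's exact configuration; num_classes=16 matches the 16 activity classes of MSRDailyActivity3D.

import torch
import torch.nn as nn
from torchvision import models

def select_keyframes(video, k=8):
    # Illustrative keyframe heuristic (not necessarily the paper's method):
    # score each frame by its mean absolute difference from the previous
    # frame and keep the k highest-scoring frames in temporal order.
    diffs = (video[1:] - video[:-1]).abs().mean(dim=(1, 2, 3))
    idx = (torch.topk(diffs, k).indices + 1).sort().values
    return video[idx]

class CNNGRU(nn.Module):
    def __init__(self, hidden_size=256, num_classes=16):
        super().__init__()
        # CNN backbone (ResNet-18 assumed here); dropping the final FC
        # layer leaves a 512-d spatial feature vector per frame. Pretrained
        # weights could be loaded via models.ResNet18_Weights.DEFAULT.
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        self.gru = nn.GRU(512, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, frames):
        # frames: (batch, time, channels, height, width)
        b, t, c, h, w = frames.shape
        feats = self.cnn(frames.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, h_n = self.gru(feats)        # temporal aggregation over frames
        return self.fc(h_n[-1])         # class logits, one per activity

# Usage: pick 8 keyframes from a 32-frame RGB clip and classify it.
clip = torch.randn(32, 3, 224, 224)
keyframes = select_keyframes(clip, k=8)      # (8, 3, 224, 224)
logits = CNNGRU()(keyframes.unsqueeze(0))    # batch of one clip
print(logits.shape)                          # torch.Size([1, 16])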
