
Details

Author(s) / Contributors
Title
Recognizing 50 human action categories of web videos
Is part of
  • Machine vision and applications, 2013-07, Vol.24 (5), p.971-981
Place / Publisher
Berlin/Heidelberg: Springer-Verlag
Year of publication
2013
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Action recognition on large categories of unconstrained videos taken from the web is a very challenging problem compared to datasets like KTH (6 actions), IXMAS (13 actions), and Weizmann (10 actions). Challenges like camera motion, different viewpoints, large interclass variations, cluttered background, occlusions, bad illumination conditions, and poor quality of web videos cause the majority of the state-of-the-art action recognition approaches to fail. Also, an increased number of categories and the inclusion of actions with high confusion add to the challenges. In this paper, we propose using the scene context information obtained from moving and stationary pixels in the key frames, in conjunction with motion features, to solve the action recognition problem on a large (50 actions) dataset with videos from the web. We perform a combination of early and late fusion on multiple features to handle the very large number of categories. We demonstrate that scene context is a very important feature to perform action recognition on very large datasets. The proposed method does not require any kind of video stabilization, person detection, or tracking and pruning of features. Our approach gives good performance on a large number of action categories; it has been tested on the UCF50 dataset with 50 action categories, which is an extension of the UCF YouTube Action (UCF11) dataset containing 11 action categories. We also tested our approach on the KTH and HMDB51 datasets for comparison.
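The combination of early and late fusion mentioned in the abstract can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; the descriptor dimensions, the random placeholder features, and the use of a linear SVM are illustrative assumptions. It only shows the general pattern: early fusion concatenates the motion and scene-context descriptors before training a single classifier, while late fusion trains one classifier per descriptor and averages their scores.

    # Illustrative sketch (not the authors' code): early vs. late fusion of two
    # hypothetical per-video descriptors (a motion histogram and a scene-context
    # histogram), with a linear SVM as a stand-in classifier.
    import numpy as np
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_videos, n_classes = 200, 50
    motion_feats = rng.random((n_videos, 128))   # assumed motion descriptor
    scene_feats = rng.random((n_videos, 64))     # assumed scene-context descriptor
    labels = rng.integers(0, n_classes, n_videos)

    # Early fusion: concatenate the descriptors, train one classifier.
    early_X = np.hstack([motion_feats, scene_feats])
    early_clf = LinearSVC().fit(early_X, labels)

    # Late fusion: train one classifier per descriptor, then average their scores.
    motion_clf = LinearSVC().fit(motion_feats, labels)
    scene_clf = LinearSVC().fit(scene_feats, labels)
    fused_scores = (motion_clf.decision_function(motion_feats) +
                    scene_clf.decision_function(scene_feats)) / 2.0
    late_pred = fused_scores.argmax(axis=1)   # predicted action category per video

In practice the per-descriptor scores could also be weighted rather than averaged; the equal weighting here is simply the most compact choice for the sketch.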
