
Details

Author(s) / Contributors
Title
Video-Based Human Walking Estimation Using Joint Gait and Pose Manifolds
Is part of
  • IEEE transactions on circuits and systems for video technology, 2017-07, Vol.27 (7), p.1540-1554
Place / Publisher
New York: IEEE
Year of publication
2017
Link to full text
Source
IEEE Electronic Library (IEL)
Descriptions/Notes
  • We study two fundamental issues about video-based human walking estimation, where the goal is to estimate 3D gait kinematics (i.e., joint positions) from 2D gait appearances (i.e., silhouettes). One is how to model the gait kinematics from different walking styles, and the other is how to represent the gait appearances captured under different views and from individuals of distinct walking styles and body shapes. Our research is conducted in three steps. First, we propose the idea of joint gait-pose manifold (JGPM), which represents gait kinematics by coupling two nonlinear variables, pose (a specific walking stage) and gait (a particular walking style) in a unified latent space. We extend the Gaussian process latent variable model (GPLVM) for JGPM learning, where two heuristic topological priors, a torus and a cylinder, are considered and several JGPMs of different degrees of freedom (DoFs) are introduced for comparative analysis. Second, we develop a validation technique and a series of benchmark tests to evaluate multiple JGPMs and recent GPLVMs in terms of their performance for gait motion modeling. It is shown that the toroidal prior is slightly better than the cylindrical one, and the JGPM of 4 DoFs that balances the toroidal prior with the intrinsic data structure achieves the best performance. Third, a JGPM-based visual gait generative model (JGPM-VGGM) is developed, where JGPM plays a central role to bridge the gap between the gait appearances and the gait kinematics. Our proposed JGPM-VGGM is learned from Carnegie Mellon University MoCap data and tested on the HumanEva-I and HumanEva-II data sets. Our experimental results demonstrate the effectiveness and competitiveness of our algorithms compared with existing algorithms.
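  • The toroidal prior mentioned in the abstract can be illustrated with a minimal sketch, assuming NumPy and fixed latent coordinates (the paper instead learns the latents with an extended GPLVM). Pose (walking stage) and gait (walking style) are treated as periodic angles on a torus, and a Gaussian-process regressor with an RBF kernel over the torus embedding maps latent points to joint positions. All radii, lengthscales, and data below are illustrative stand-ins, not the authors' implementation:

    import numpy as np

    def torus_embed(pose, gait, R=2.0, r=1.0):
        """Map periodic (pose, gait) angles to 3D points on a torus.
        R and r are illustrative major/minor radii."""
        x = (R + r * np.cos(pose)) * np.cos(gait)
        y = (R + r * np.cos(pose)) * np.sin(gait)
        z = r * np.sin(pose)
        return np.stack([x, y, z], axis=-1)

    def rbf_kernel(A, B, lengthscale=1.0):
        """Squared-exponential kernel on chordal distances in the embedding."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / lengthscale**2)

    # Toy training set: random latent angles with random stand-in
    # "joint positions" (real data would come from MoCap sequences).
    rng = np.random.default_rng(0)
    pose_tr = rng.uniform(0.0, 2.0 * np.pi, 50)   # walking stage
    gait_tr = rng.uniform(0.0, 2.0 * np.pi, 50)   # walking style
    X_tr = torus_embed(pose_tr, gait_tr)
    Y_tr = rng.normal(size=(50, 3))               # stand-in joint positions

    # GP posterior mean at a query latent point (noise variance 1e-2).
    # Because the embedding is periodic in both angles, predictions wrap
    # around the gait cycle, which is the point of the toroidal prior.
    K = rbf_kernel(X_tr, X_tr) + 1e-2 * np.eye(50)
    x_q = torus_embed(np.array([0.3]), np.array([1.2]))
    k_q = rbf_kernel(x_q, X_tr)
    y_mean = k_q @ np.linalg.solve(K, Y_tr)       # predicted joint positions
    print(y_mean)

    The chordal kernel on the torus embedding is one simple way to respect the periodicity of both latent variables; the paper's higher-DoF JGPMs relax this rigid topology toward the intrinsic data structure.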
