
Details

Author(s) / Contributors
Title
Supporting One-Time Point Annotations for Gesture Recognition
Is Part Of
  • IEEE transactions on pattern analysis and machine intelligence, 2017-11, Vol.39 (11), p.2270-2283
Place / Publisher
United States: IEEE
Year of Publication
2017
Source
IEEE Electronic Library Online
Descriptions / Notes
  • This paper investigates a new annotation technique that significantly reduces the time needed to annotate training data for gesture recognition. Conventionally, annotations comprise the start and end times of gestures in sensor recordings, together with the corresponding labels. In this work, we propose one-time point annotation, in which labelers do not have to select the start and end times carefully, but simply mark a single point in time while a gesture is happening. The technique gives labelers more freedom and significantly reduces their burden. To make one-time point annotations usable, we propose a novel BoundarySearch algorithm that automatically finds the correct temporal boundaries of gestures by discovering data patterns around the given one-time point annotations. The corrected annotations are then used to train gesture models. We evaluate the method on three wearable gesture recognition applications with various gesture classes (10-17 classes) recorded with different sensor modalities. The results show that training on the corrected annotations achieves performance close to fully supervised training on clean annotations (lower by at most 5 percent F1-score on average). Furthermore, the BoundarySearch algorithm is also evaluated on the ChaLearn 2014 multi-modal gesture recognition challenge from computer vision, recorded with Kinect sensors, and achieves similar results.
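  • Illustrative sketch (not part of the record): the abstract explains that BoundarySearch recovers a gesture's temporal boundaries from a single annotated time point by examining the data around that point. The Python sketch below only illustrates this general idea; it is not the paper's algorithm. The 1-D signal format, the local-energy stopping criterion, and the parameters window and rel_threshold are all assumptions made for the example.

        import numpy as np

        def boundary_search_sketch(signal, point_idx, window=25, rel_threshold=0.2):
            """Hypothetical sketch of a boundary search around a one-time
            point annotation. NOT the paper's BoundarySearch algorithm:
            here we simply expand left and right from the annotated point
            until the local signal energy falls below a fraction of the
            energy at the annotation.

            signal        -- 1-D array of sensor magnitudes (assumed format)
            point_idx     -- index of the labeler's one-time point annotation
            window        -- half-width of the energy window (assumption)
            rel_threshold -- energy fraction taken as "gesture ended" (assumption)
            """
            def local_energy(i):
                lo, hi = max(0, i - window), min(len(signal), i + window)
                return float(np.mean(np.square(signal[lo:hi])))

            ref = local_energy(point_idx)
            start = point_idx
            while start > 0 and local_energy(start) > rel_threshold * ref:
                start -= 1
            end = point_idx
            while end < len(signal) - 1 and local_energy(end) > rel_threshold * ref:
                end += 1
            return start, end  # estimated temporal boundaries of the gesture

        if __name__ == "__main__":
            # Synthetic example: a burst of activity (the "gesture") in noise.
            rng = np.random.default_rng(0)
            sig = rng.normal(0, 0.05, 1000)
            sig[400:600] += np.sin(np.linspace(0, 20 * np.pi, 200))  # gesture
            print(boundary_search_sketch(sig, point_idx=500))  # near (400, 600)

    In this toy setting, expansion stops once local activity drops below a fraction of the activity at the annotated point. The paper instead discovers data patterns around the annotation, which this sketch does not attempt to reproduce.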
