This paper introduces and evaluates a novel hybrid technique that fuses two eye-tracking methodologies: photosensor oculography and video oculography. The core concept is to use a few fast, power-economic photosensors as the primary mechanism for high-speed eye tracking while, in parallel, operating a video sensor at a low sampling rate (snapshot mode) to perform dead-reckoning error correction when sensor shifts occur. We present and evaluate the functional components of the proposed technique using model-based simulation. Our experiments cover scenarios combining horizontal and vertical eye movements with sensor shifts. Our evaluation shows that the proposed technique provides robustness to sensor shifts that could otherwise induce errors larger than 5°. Our analysis suggests that the technique could enable high-speed eye tracking at low power profiles, making it suitable for emerging head-mounted devices, e.g., AR/VR headsets.
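The snapshot-based correction idea can be illustrated with a toy one-dimensional simulation: a fast photosensor channel accumulates a bias after a sensor shift, and an occasional video "snapshot" re-estimates that bias so subsequent photosensor samples can be corrected. All parameters, noise levels, and the gaze signal below are illustrative assumptions, not the paper's actual simulation model.

```python
import random

def simulate_hybrid_tracking(n_samples=2000, video_interval=200,
                             shift_at=500, shift_deg=5.0, seed=0):
    """Toy 1-D sketch of hybrid photosensor/video eye tracking.

    The photosensor channel runs every sample but picks up a constant
    bias when a sensor shift occurs; the video channel fires only every
    `video_interval` samples and is used to re-estimate that bias
    (dead-reckoning correction). Returns per-sample absolute errors (deg).
    """
    rng = random.Random(seed)
    bias = 0.0        # error introduced by the sensor shift (deg)
    correction = 0.0  # bias estimate learned from video snapshots (deg)
    errors = []
    for t in range(n_samples):
        true_gaze = 10.0 * ((t % 100) / 100.0)   # hypothetical eye movement
        if t == shift_at:
            bias = shift_deg                      # headset slips on the head
        # fast, cheap photosensor channel: biased after the shift
        photo = true_gaze + bias + rng.gauss(0, 0.05)
        if t % video_interval == 0:
            # slow, unbiased video snapshot: re-zero the photosensor bias
            video = true_gaze + rng.gauss(0, 0.1)
            correction = photo - video
        est = photo - correction
        errors.append(abs(est - true_gaze))
    return errors
```

In this sketch the uncorrected shift would contribute ~5° of error indefinitely; with snapshot correction the error is large only in the window between the shift and the next video frame, after which it drops back to the noise floor. The trade-off between correction latency and video power cost is governed by `video_interval`.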