
Details

Author(s) / Contributors
Title
Revisiting Video Saliency Prediction in the Deep Learning Era
Is Part Of
  • IEEE transactions on pattern analysis and machine intelligence, 2021-01, Vol.43 (1), p.220-237
Place / Publisher
United States: IEEE
Year of Publication
2021
Source
IEL
Descriptions / Notes
  • Predicting where people look in static scenes, a.k.a. visual saliency, has received significant research interest recently. However, relatively little effort has been spent on understanding and modeling visual attention over dynamic scenes. This work makes three contributions to video saliency research. First, we introduce a new benchmark, called DHF1K (Dynamic Human Fixation 1K), for predicting fixations during dynamic scene free-viewing, a long-standing need in this field. DHF1K consists of 1K high-quality, elaborately selected video sequences annotated by 17 observers using an eye tracker. The videos span a wide range of scenes, motions, object types, and backgrounds. Second, we propose a novel video saliency model, called ACLNet (Attentive CNN-LSTM Network), which augments the CNN-LSTM architecture with a supervised attention mechanism to enable fast end-to-end saliency learning. The attention mechanism explicitly encodes static saliency information, thus allowing the LSTM to focus on learning a more flexible temporal saliency representation across successive frames. Such a design fully leverages existing large-scale static fixation datasets, avoids overfitting, and significantly improves training efficiency and testing performance. Third, we perform an extensive evaluation of state-of-the-art saliency models on three datasets: DHF1K, Hollywood-2, and UCF Sports. An attribute-based analysis of previous saliency models and cross-dataset generalization are also presented. Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that ACLNet outperforms the other contenders and has a fast processing speed (40 fps on a single GPU). Our code and all results are available at https://github.com/wenguanwang/DHF1K.
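  • To make the abstract's architectural idea concrete, below is a minimal, illustrative PyTorch sketch of an attentive CNN-LSTM: a per-frame CNN feeds a supervised attention branch (a static saliency map that can be trained on static fixation data), whose output modulates the features passed to a convolutional LSTM for temporal saliency. This is NOT the authors' ACLNet implementation (see the linked repository); all layer sizes, the tiny ConvLSTM cell, and the names AttentiveCNNLSTM / ConvLSTMCell are assumptions for illustration only.

    # Minimal sketch of an attentive CNN-LSTM for video saliency (PyTorch).
    # Illustrative only; not the authors' code. Sizes and names are assumed.
    import torch
    import torch.nn as nn

    class ConvLSTMCell(nn.Module):
        """Minimal convolutional LSTM cell: all four gates from one conv."""
        def __init__(self, in_ch, hid_ch):
            super().__init__()
            self.hid_ch = hid_ch
            self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

        def forward(self, x, state):
            h, c = state
            i, f, o, g = torch.chunk(
                self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
            c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
            h = torch.sigmoid(o) * torch.tanh(c)
            return h, c

    class AttentiveCNNLSTM(nn.Module):
        def __init__(self, feat_ch=64, hid_ch=64):
            super().__init__()
            # Per-frame CNN backbone (a real model would use a pretrained net).
            self.cnn = nn.Sequential(
                nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(),
            )
            # Supervised attention branch: predicts a static saliency map,
            # trainable directly on large static-fixation datasets.
            self.attention = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())
            self.lstm = ConvLSTMCell(feat_ch, hid_ch)
            self.readout = nn.Conv2d(hid_ch, 1, 1)

        def forward(self, frames):
            # frames: (batch, time, 3, H, W) -> per-frame saliency maps.
            b, t, _, h, w = frames.shape
            state = (frames.new_zeros(b, self.lstm.hid_ch, h, w),
                     frames.new_zeros(b, self.lstm.hid_ch, h, w))
            static_maps, dynamic_maps = [], []
            for k in range(t):
                feat = self.cnn(frames[:, k])
                att = self.attention(feat)             # static saliency
                state = self.lstm(feat * att, state)   # attention-gated input
                static_maps.append(att)
                dynamic_maps.append(torch.sigmoid(self.readout(state[0])))
            return torch.stack(dynamic_maps, 1), torch.stack(static_maps, 1)

    # Usage: two 8-frame clips at 64x64 resolution.
    model = AttentiveCNNLSTM()
    dyn, stat = model(torch.randn(2, 8, 3, 64, 64))
    print(dyn.shape, stat.shape)  # torch.Size([2, 8, 1, 64, 64]) for both

    The key design point, as described in the abstract, is that the static attention branch gives the recurrent part a strong spatial prior, so the LSTM only has to learn the temporal dynamics on top of it.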
Language
English
Identifiers
ISSN: 0162-8828
eISSN: 1939-3539, 2160-9292
DOI: 10.1109/TPAMI.2019.2924417
Title ID: cdi_proquest_miscellaneous_2250617725
