Deep learning techniques are widely used to model human visual saliency, to the point that state-of-the-art performance is now attained only by deep neural networks. However, one key part of a typical deep learning model is often neglected when it comes to modeling visual saliency: the choice of the loss function.
In this work, we explore some of the most popular loss functions used in deep saliency models. We demonstrate that, for a fixed network architecture, changing the loss function can significantly improve (or degrade) the results, underscoring the importance of this choice when designing a model. We also evaluate the relevance to saliency prediction of new loss functions inspired by metrics used in style-transfer tasks. Finally, we show that a linear combination of several well-chosen loss functions leads to significant performance improvements across different datasets as well as on a different network architecture, demonstrating the robustness of a combined metric.
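To make the idea of a linearly combined loss concrete, the sketch below combines two losses commonly used in saliency prediction: the Kullback-Leibler divergence between the predicted and ground-truth saliency distributions, and a loss derived from the Pearson correlation coefficient (CC). The specific loss choices, weights, and function names here are illustrative assumptions, not the paper's actual combination.

```python
import numpy as np

def kld_loss(pred, target, eps=1e-7):
    # KL divergence between two saliency maps, each normalized
    # to a probability distribution; lower is better.
    p = pred / (pred.sum() + eps)
    q = target / (target.sum() + eps)
    return float(np.sum(q * np.log(eps + q / (p + eps))))

def cc_loss(pred, target, eps=1e-7):
    # 1 minus the Pearson correlation coefficient, so that a
    # perfectly correlated prediction yields a loss near 0.
    p = (pred - pred.mean()) / (pred.std() + eps)
    t = (target - target.mean()) / (target.std() + eps)
    return float(1.0 - np.mean(p * t))

def combined_loss(pred, target, w_kld=1.0, w_cc=0.5):
    # Hypothetical linear combination; the weights here are
    # arbitrary and would be tuned in practice.
    return w_kld * kld_loss(pred, target) + w_cc * cc_loss(pred, target)
```

In a training loop, `combined_loss` would replace a single-metric objective; an identical prediction and target give a loss near zero, while mismatched maps are penalized by both terms.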