Details

Author(s) / Contributors
Title
When Does Contrastive Visual Representation Learning Work?
Is Part Of
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 1-10
Place / Publisher
IEEE
Year of Publication
2022
Link to Full Text
Source
IEEE Electronic Library Online
Descriptions/Notes
  • Recent self-supervised representation learning techniques have largely closed the gap between supervised and unsupervised learning on ImageNet classification. While the particulars of pretraining on ImageNet are now relatively well understood, the field still lacks widely accepted best practices for replicating this success on other datasets. As a first step in this direction, we study contrastive self-supervised learning on four diverse large-scale datasets. By looking through the lenses of data quantity, data domain, data quality, and task granularity, we provide new insights into the necessary conditions for successful self-supervised learning. Our key findings include observations such as: (i) the benefit of additional pretraining data beyond 500k images is modest, (ii) adding pretraining images from another domain does not lead to more general representations, (iii) corrupted pretraining images have a disparate impact on supervised and self-supervised pretraining, and (iv) contrastive learning lags far behind supervised learning on fine-grained visual classification tasks.
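For context, the contrastive pretraining the abstract refers to is typically trained with an NT-Xent (InfoNCE) objective over two augmented views of each image, as in SimCLR. The sketch below is a minimal, illustrative PyTorch version of that loss, not the authors' implementation; the function name `nt_xent_loss` and the temperature default are assumptions made here for the example.

```python
# Minimal sketch of a SimCLR-style NT-Xent (InfoNCE) contrastive loss.
# Illustrative only; not the code used in the paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N images."""
    n = z1.shape[0]
    # Stack both views and project onto the unit sphere -> (2N, D).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by the temperature.
    sim = z @ z.T / temperature
    # Mask out self-similarity so each sample cannot match itself.
    sim.fill_diagonal_(float("-inf"))
    # The positive for sample i is its other view: index i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    # Cross-entropy pulls positives together and pushes the 2N-2 negatives apart.
    return F.cross_entropy(sim, targets)
```

In practice, `z1` and `z2` come from passing two random augmentations of the same batch through the encoder and a small projection head; larger batches supply more negatives per positive pair.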
