Pattern recognition letters, 2023-02, Vol.166, p.53-60
2023

Details

Author(s) / Contributors
Title
Jigsaw-ViT: Learning jigsaw puzzles in vision transformer
Is part of
  • Pattern recognition letters, 2023-02, Vol.166, p.53-60
Place / Publisher
Elsevier B.V
Year of publication
2023
Link to full text
Source
Elsevier ScienceDirect Journals Complete
Descriptions/Notes
  • Highlights:
    • Introduces a jigsaw-puzzle-solving auxiliary loss into vision-transformer-based models.
    • Uses two techniques: removing positional embeddings and randomly masking patches.
    • Improves vision transformers' generalization on large-scale image classification.
    • Improves vision transformers' robustness against label noise.
    • Improves vision transformers' robustness against adversarial examples.

    The success of the Vision Transformer (ViT) in various computer vision tasks has promoted the ever-increasing prevalence of this convolution-free network. The fact that ViT works on image patches makes it potentially relevant to the problem of jigsaw puzzle solving, a classical self-supervised task that aims to reorder shuffled sequential image patches back to their original form. Solving jigsaw puzzles has been demonstrated to be helpful for diverse tasks using Convolutional Neural Networks (CNNs), such as feature representation learning, domain generalization, and fine-grained classification. In this paper, we explore solving jigsaw puzzles as a self-supervised auxiliary loss in ViT for image classification, naming the resulting model Jigsaw-ViT. We show two modifications that can make Jigsaw-ViT superior to standard ViT: discarding positional embeddings and masking patches randomly. Despite its simplicity, the proposed Jigsaw-ViT improves both generalization and robustness over the standard ViT, two properties that are usually a trade-off. Numerical experiments verify that adding the jigsaw puzzle branch provides better generalization to ViT on large-scale image classification on ImageNet. Moreover, the auxiliary loss also improves robustness against noisy labels on Animal-10N, Food-101N, and Clothing1M, as well as against adversarial examples. Our implementation is available at https://yingyichen-cyy.github.io/Jigsaw-ViT.
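  • The auxiliary-task setup the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the helper names (`make_jigsaw_task`, `total_loss`) and the balancing weight `eta` are hypothetical, and the real model predicts each shuffled patch's original position with a transformer head rather than manipulating index lists.

```python
import random

def make_jigsaw_task(num_patches, mask_ratio=0.25, seed=None):
    """Build a toy jigsaw target: shuffle patch indices, then mask a
    random subset of patches (one of the two techniques the abstract
    names; the other, dropping positional embeddings, has no analogue
    in this index-level sketch)."""
    rng = random.Random(seed)
    perm = list(range(num_patches))
    rng.shuffle(perm)  # perm[i] = original position of the patch shown at slot i
    masked = set(rng.sample(range(num_patches), int(num_patches * mask_ratio)))
    return perm, masked

def total_loss(cls_loss, jigsaw_loss, eta=0.5):
    """Joint objective: classification loss plus a weighted jigsaw
    auxiliary loss. `eta` is an assumed balancing weight, not a value
    taken from the paper."""
    return cls_loss + eta * jigsaw_loss
```

  During training, the jigsaw branch would be asked to recover `perm` from the shuffled, partially masked patch sequence, and its prediction loss would enter the joint objective via `total_loss`.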
Language
English
Identifiers
ISSN: 0167-8655
eISSN: 1872-7344
DOI: 10.1016/j.patrec.2022.12.023
Title ID: cdi_crossref_primary_10_1016_j_patrec_2022_12_023
