Details

Author(s) / Contributors
Title
Knowledge distillation: A good teacher is patient and consistent
Is part of
  • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 10915-10924
Place / Publisher
IEEE
Year of publication
2022
Link to full text
Source
IEEE Xplore
Descriptions/Notes
  • There is a growing discrepancy in computer vision between large-scale models that achieve state-of-the-art performance and models that are affordable in practical applications. In this paper we address this issue and significantly bridge the gap between these two types of models. Throughout our empirical investigation we do not aim to necessarily propose a new method, but strive to identify a robust and effective recipe for making state-of-the-art large scale models affordable in practice. We demonstrate that, when performed correctly, knowledge distillation can be a powerful tool for reducing the size of large models without compromising their performance. In particular, we uncover that there are certain implicit design choices, which may drastically affect the effectiveness of distillation. Our key contribution is the explicit identification of these design choices, which were not previously articulated in the literature. We back up our findings by a comprehensive empirical study, demonstrate compelling results on a wide range of vision datasets and, in particular, obtain a state-of-the-art ResNet-50 model for ImageNet, which achieves 82.8% top-1 accuracy.
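  • The abstract above turns on knowledge distillation: a small student network is trained to match the predicted class distribution of a large teacher. Purely as an illustration (this is not code from the paper; the temperature, function and variable names are invented here), a minimal softmax/KL-divergence distillation loss in PyTorch could be sketched as:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=1.0):
        # Softened class distributions; the temperature is a free hyperparameter.
        t = temperature
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        # KL(teacher || student), averaged over the batch; the t*t factor keeps
        # gradient magnitudes comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t

    # Sketch of one training step. "Consistent" teaching means teacher and
    # student see the exact same augmented crop; "patient" teaching means the
    # loop runs for a very long schedule. teacher, student, loader and
    # optimizer are hypothetical placeholders, not objects from the paper.
    # teacher.eval()
    # for images, _ in loader:
    #     with torch.no_grad():
    #         teacher_logits = teacher(images)
    #     loss = distillation_loss(student(images), teacher_logits)
    #     loss.backward()
    #     optimizer.step()
    #     optimizer.zero_grad()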
Language
English
Identifiers
eISSN: 2575-7075
DOI: 10.1109/CVPR52688.2022.01065
Title ID: cdi_ieee_primary_9879513
