
Details

Author(s) / Contributors
Title
Knowledge Distillation with Deep Supervision
Is Part of
  • 2023 International Joint Conference on Neural Networks (IJCNN), 2023, p.1-8
Place / Publisher
IEEE
Year of Publication
2023
Source
IEEE/IET Electronic Library
Descriptions/Notes
  • Knowledge distillation aims to enhance the performance of a lightweight student model by exploiting the knowledge from a pre-trained cumbersome teacher model. However, in traditional knowledge distillation, teacher predictions are only used to provide the supervisory signal for the last layer of the student model, which may leave the shallow student layers without accurate training guidance during layer-by-layer back-propagation and thus hinder effective knowledge transfer. To address this issue, we propose Deeply-Supervised Knowledge Distillation (DSKD), which fully utilizes class predictions and feature maps of the teacher model to supervise the training of shallow student layers. A loss-based weight allocation strategy is developed in DSKD to adaptively balance the learning process of each shallow layer, so as to further improve the student performance. Extensive experiments on CIFAR-100 and TinyImageNet with various teacher-student models show significantly improved performance, confirming the effectiveness of our proposed method. Code is available at: https://github.com/luoshiya/DSKD
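    The sketch below illustrates the kind of deeply-supervised distillation loss the abstract describes: every auxiliary head attached to a shallow student layer is supervised by both the ground-truth labels and the teacher's class predictions, and the per-head losses are combined with loss-based weights. It is not the authors' implementation (see the linked repository for the official code); the softmax-over-losses weighting and the auxiliary-head interface are assumptions made for illustration, and the teacher feature-map supervision mentioned in the abstract is omitted for brevity.

        # Minimal sketch of a deeply-supervised distillation loss. Not the authors'
        # code (official release: https://github.com/luoshiya/DSKD). The weighting
        # below (softmax over detached per-head losses) is an illustrative
        # assumption, and teacher feature-map supervision is left out.
        import torch
        import torch.nn.functional as F

        def kd_loss(student_logits, teacher_logits, T=4.0):
            # Standard soft-target distillation term: KL divergence between the
            # temperature-softened student and teacher class distributions.
            return F.kl_div(
                F.log_softmax(student_logits / T, dim=1),
                F.softmax(teacher_logits / T, dim=1),
                reduction="batchmean",
            ) * (T * T)

        def dskd_loss(head_logits, teacher_logits, labels, alpha=0.5, T=4.0):
            # head_logits: list of class logits from auxiliary heads attached to
            # shallow student layers, with the final classifier's logits last.
            # Each head receives both label supervision and the teacher's
            # predictions, so shallow layers get a direct training signal.
            per_head = torch.stack([
                (1.0 - alpha) * F.cross_entropy(logits, labels)
                + alpha * kd_loss(logits, teacher_logits, T)
                for logits in head_logits
            ])
            # Loss-based weight allocation (assumed form): weights are computed
            # from the detached per-head losses, so they carry no gradient.
            weights = torch.softmax(per_head.detach(), dim=0)
            return (weights * per_head).sum()

    In training, teacher_logits would come from a forward pass of the frozen teacher, and head_logits from a student wrapped with auxiliary classifiers on its intermediate feature maps.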
Language
English
Identifiers
eISSN: 2161-4407
DOI: 10.1109/IJCNN54540.2023.10191309
Titel-ID: cdi_ieee_primary_10191309
