
Details

Author(s) / Contributors
Title
Adaptive Verifiable Training Using Pairwise Class Similarity
Is part of
  • Proceedings of the ... AAAI Conference on Artificial Intelligence, 2021, Vol.35 (11), p.10201-10209
Year of publication
2021
Descriptions/Notes
  • Verifiable training has shown success in creating neural networks that are provably robust to a given amount of noise. However, despite only enforcing a single robustness criterion, its performance scales poorly with dataset complexity. On CIFAR10, a non-robust LeNet model has a 21.63% error rate, while a model created using verifiable training and an L-infinity robustness criterion of 8/255 has an error rate of 57.10%. Upon examination, we find that when labeling visually similar classes, the model's error rate is as high as 61.65%. Thus, we attribute the loss in performance to inter-class similarity. Classes that are similar (i.e., close in the feature space) increase the difficulty of learning a robust model. While it may be desirable to train a model to be robust for a large robustness region, pairwise class similarities limit the potential gains. Furthermore, consideration must be given to the relative cost of mistaking one class for another. In security- or safety-critical tasks, similar classes are likely to belong to the same group, and thus are equally sensitive. In this work, we propose a new approach that utilizes inter-class similarity to improve the performance of verifiable training and create robust models with respect to multiple adversarial criteria. First, we cluster similar classes using agglomerative clustering and assign robustness criteria based on the degree of similarity between clusters. Next, we propose two methods to apply our approach: (1) the Inter-Group Robustness Prioritization method, which uses a custom loss term to create a single model with multiple robustness guarantees and (2) the neural decision tree method, which trains multiple sub-classifiers with different robustness guarantees and combines them in a decision tree architecture. Our experiments on Fashion-MNIST and CIFAR10 demonstrate that by prioritizing the robustness between the most dissimilar groups, we improve clean performance by up to 9.63% and 30.89%, respectively. Furthermore, on CIFAR100, our approach reduces the clean error rate by 26.32%.
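  • Note: the abstract describes a first step of grouping similar classes via agglomerative clustering and then assigning robustness criteria by inter-cluster similarity. The following is a minimal illustrative sketch of that step only, using SciPy's hierarchical clustering; the per-class feature centroids (`class_features`), the cluster count, and the epsilon values 2/255 and 8/255 are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch: cluster classes by pairwise similarity and assign a
# larger robustness radius (epsilon) across dissimilar clusters than within
# a cluster. All names and values here are illustrative assumptions.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist


def cluster_classes(class_features: np.ndarray, n_clusters: int = 3) -> np.ndarray:
    """Agglomeratively cluster classes from per-class mean feature vectors."""
    # Condensed pairwise Euclidean distances between class centroids.
    condensed = pdist(class_features, metric="euclidean")
    Z = linkage(condensed, method="average")          # average-linkage agglomeration
    return fcluster(Z, t=n_clusters, criterion="maxclust")  # cluster id per class


def assign_epsilons(cluster_ids: np.ndarray,
                    eps_within: float = 2 / 255,
                    eps_between: float = 8 / 255) -> np.ndarray:
    """Per class-pair robustness radius: larger across dissimilar clusters."""
    same_cluster = cluster_ids[:, None] == cluster_ids[None, :]
    return np.where(same_cluster, eps_within, eps_between)


if __name__ == "__main__":
    # Toy example: 10 classes with 64-dimensional mean feature vectors.
    rng = np.random.default_rng(0)
    centroids = rng.normal(size=(10, 64))
    ids = cluster_classes(centroids, n_clusters=3)
    eps_matrix = assign_epsilons(ids)
    print(ids)            # cluster assignment per class
    print(eps_matrix)     # pairwise robustness radii
```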
Language
English
Identifiers
ISSN: 2159-5399
eISSN: 2374-3468
DOI: 10.1609/aaai.v35i11.17223
Title ID: cdi_crossref_primary_10_1609_aaai_v35i11_17223
Format
