
Details

Author(s) / Contributors
Title
Priority Adversarial Example in Evasion Attack on Multiple Deep Neural Networks
Is Part Of
  • 2019 International Conference on Artificial Intelligence in Information and Communication (ICAIIC), 2019, p.399-404
Place / Publisher
IEEE
Year of Publication
2019
Source
IEEE Electronic Library Online
Descriptions/Notes
  • Deep neural networks (DNNs) provide superior performance on machine learning tasks such as image recognition, speech recognition, pattern recognition, and intrusion detection. However, an adversarial example created by adding a small amount of noise to the original data can cause misclassification by the DNN, while the human eye cannot detect the difference from the original data. For example, if an attacker generates a modified left-turn road sign so that it is incorrectly categorized by a DNN, an autonomous vehicle with the DNN will incorrectly classify the modified sign as a right-turn sign, whereas a human will correctly classify it as a left-turn sign. Such an adversarial example is a serious threat to a DNN. Recently, a multi-target adversarial example was introduced that causes misclassification by several models, each in its own target class, using a single modified image. However, it has the weakness that as the number of target models increases, the overall attack success rate decreases. Therefore, if there are several models that the attacker wishes to target, the attacker needs to control the attack success rate for each model by considering the attack priority of each model. In this paper, we propose a priority adversarial example that considers the attack priority for each model when targeting several models. The proposed method controls the attack success rate for each model by adjusting the weight of the attack function in the generation process, while maintaining minimum distortion. We used TensorFlow, a widely used machine learning library, and MNIST as the dataset. Experimental results show that the proposed method can control the attack success rate for each model by considering the attack priority of each model while maintaining minimum distortion (on average 3.95 and 2.45 in targeted and untargeted attacks, respectively).
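  • The abstract outlines the generation mechanism: a weighted sum of per-model attack losses, where each weight encodes a model's attack priority, is optimized together with a distortion penalty. The following is a minimal TensorFlow sketch of that idea for the targeted case; the function name, the Adam optimizer, the cross-entropy attack loss, and the L2 distortion penalty are illustrative assumptions (the paper's exact objective is not given in this record), and the models are assumed to be Keras classifiers returning logits.

```python
# Hedged sketch of a priority-weighted multi-model evasion attack.
# Names, weights, and loss shape are assumptions, not the authors' exact method.
import tensorflow as tf

def priority_adversarial_example(x, target_class, models, priorities,
                                 distortion_coeff=1.0, steps=200, lr=0.01):
    """Perturb input `x` so that each model in `models` classifies it as
    `target_class`, weighting each model's attack loss by its priority."""
    delta = tf.Variable(tf.zeros_like(x))       # perturbation to optimize
    opt = tf.keras.optimizers.Adam(learning_rate=lr)
    y = tf.constant([target_class])

    for _ in range(steps):
        with tf.GradientTape() as tape:
            x_adv = tf.clip_by_value(x + delta, 0.0, 1.0)
            # Weighted sum of per-model misclassification losses:
            # a higher-priority model receives a larger share of the gradient.
            attack_loss = 0.0
            for model, w in zip(models, priorities):
                logits = model(x_adv, training=False)
                attack_loss += w * tf.keras.losses.sparse_categorical_crossentropy(
                    y, logits, from_logits=True)
            # L2 distortion term keeps the perturbation small.
            loss = tf.reduce_sum(attack_loss) + distortion_coeff * tf.nn.l2_loss(delta)
        grads = tape.gradient(loss, [delta])
        opt.apply_gradients(zip(grads, [delta]))

    return tf.clip_by_value(x + delta, 0.0, 1.0)
```

    Raising one model's weight in `priorities` shifts the optimization budget toward fooling that model, which is the control knob the abstract calls attack priority: the per-model success rates can be traded off against each other while the distortion term holds the total perturbation near its minimum.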
Language
English
Identifiers
DOI: 10.1109/ICAIIC.2019.8669034
Titel-ID: cdi_ieee_primary_8669034
