
Details

Author(s) / Contributors
Title
Fooling a Neural Network in Military Environments: Random Untargeted Adversarial Example
Is Part Of
  • MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), 2018, p.456-461
Place / Publisher
IEEE
Year of Publication
2018
Source
IEEE Xplore
Descriptions/Notes
  • Deep neural networks (DNNs) show superior performance in machine learning tasks such as image recognition, speech recognition, intrusion detection, and pattern analysis. However, an adversarial example, created by adding a small amount of noise to the original sample, can cause misclassification by the DNN. As adversarial examples are a serious threat to DNNs, there has been much research into the generation of adversarial examples designed for attacking DNNs. Adversarial example attacks fall into two categories: targeted and untargeted. A targeted adversarial example causes the machine to misinterpret an object as the attacker's desired class, whereas an untargeted adversarial example merely causes it to misinterpret the object as some incorrect class. In this paper, we focus on the untargeted scenario because it introduces less distortion from the original sample and has a faster generation time than the targeted scenario. However, there is a pattern problem in generating untargeted adversarial examples: because of the similarity between the original class and specific classes, the defending system may be able to infer the original class by analyzing the output classes of the untargeted adversarial examples. To overcome this problem, we propose a new method for generating untargeted adversarial examples, one that uses an arbitrary class in the generation process. For experimental datasets, we used MNIST and CIFAR10, and TensorFlow was employed as the machine learning library. Through our experiments, we show that the proposed method can generate random untargeted adversarial examples that do not concentrate on a specific class for a given original class, while keeping distortion to a minimum (1.99 and 42.32 on MNIST and CIFAR10, respectively) and maintaining a 100% attack success rate.
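The "arbitrary class" idea from the abstract can be sketched with a toy gradient-sign step: instead of simply ascending the loss of the true class (plain untargeted), pick a random wrong class and descend its loss, so the resulting misclassifications spread across classes rather than clustering on the class most similar to the original. This is a minimal illustrative sketch, not the paper's implementation: the tiny hand-set linear softmax model `W`, the helper names (`untargeted_toward`, `random_untargeted`), and the step size are all assumptions for illustration.

```python
import math
import random

# Toy 3-class linear softmax model (assumed for illustration, not from the paper).
W = [[2.0, 0.0], [0.0, 2.0], [-2.0, -2.0]]  # one weight row per class

def logits(x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def predict(x):
    z = logits(x)
    return z.index(max(z))

def grad_wrt_input(x, cls):
    # Gradient of the cross-entropy loss for class `cls` w.r.t. the input:
    # dL/dx = W^T (softmax(Wx) - onehot(cls))
    p = softmax(logits(x))
    d = [pi - (1.0 if i == cls else 0.0) for i, pi in enumerate(p)]
    return [sum(W[k][j] * d[k] for k in range(len(W))) for j in range(len(x))]

def untargeted_toward(x, target, eps):
    # One FGSM-style step that *descends* the loss of a chosen wrong class,
    # pushing the sample toward `target`.
    g = grad_wrt_input(x, target)
    return [xi - eps * (1 if gi > 0 else -1 if gi < 0 else 0)
            for xi, gi in zip(x, g)]

def random_untargeted(x, eps, rng=random):
    # The abstract's idea in miniature: use an arbitrary (random) wrong class
    # during generation so misclassifications do not favor one specific class.
    orig = predict(x)
    target = rng.choice([c for c in range(len(W)) if c != orig])
    return untargeted_toward(x, target, eps)
```

For example, with `x = [1.0, 0.2]` (predicted class 0), `random_untargeted(x, eps=1.0)` perturbs the sample toward a randomly chosen wrong class, so repeated runs misclassify into different classes instead of always the one closest to class 0.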
Language
English
Identifiers
eISSN: 2155-7586
DOI: 10.1109/MILCOM.2018.8599707
Title ID: cdi_ieee_primary_8599707
