Details

Author(s) / Contributors
Title
Adversarial Attacks on Deep Learning-Based DOA Estimation With Covariance Input
Is part of
  • IEEE signal processing letters, 2023, Vol.30, p.1377-1381
Place / Publisher
New York: IEEE
Year of publication
2023
Source
IEEE Electronic Library (IEL)
Descriptions/Notes
  • Although deep learning methods have advanced considerably across various domains, recent research has shown that carefully crafted adversarial samples can severely degrade the performance of deep learning models. Such adversarial examples raise concerns about the reliability and safety of deep-learning-based models. The robustness of deep-learning-based direction-of-arrival (DOA) estimation methods against adversarial samples has so far received little attention. This letter fills that gap by leveraging the differentiability of the transformation from the original signal to the covariance matrix: because gradients can be propagated through this transformation, the robustness of a DOA estimation model that takes the covariance matrix as input can be investigated directly. Four different white-box attack methods are used to generate adversarial samples and evaluate the model's resilience. The experimental results demonstrate that all four methods substantially increase the estimation error of the DOA estimation model, posing a serious threat to its security.
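  • The key enabler described in the abstract is that the signal-to-covariance transformation is differentiable, so gradients of the estimation loss can be backpropagated through it to the raw array signal. The following minimal sketch illustrates this with one representative white-box method (FGSM); the abstract does not name the four attacks used, and the network, tensor shapes, and parameter values here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class DOANet(nn.Module):
        """Toy DOA regressor taking the real/imag-stacked covariance as input
        (hypothetical architecture, not the model from the letter)."""
        def __init__(self, n_antennas: int):
            super().__init__()
            self.net = nn.Sequential(
                nn.Flatten(),
                nn.Linear(2 * n_antennas * n_antennas, 128),
                nn.ReLU(),
                nn.Linear(128, 1),  # predicted DOA (degrees)
            )

        def forward(self, R: torch.Tensor) -> torch.Tensor:
            # R: (batch, n, n) complex covariance -> real-valued (batch, n, 2n)
            x = torch.cat([R.real, R.imag], dim=-1)
            return self.net(x)

    def covariance(X: torch.Tensor) -> torch.Tensor:
        # Differentiable sample covariance of snapshots X, shape (batch, n, T).
        T = X.shape[-1]
        return X @ X.conj().transpose(-2, -1) / T

    def fgsm_attack(model: nn.Module, X: torch.Tensor,
                    theta_true: torch.Tensor, eps: float) -> torch.Tensor:
        # One FGSM step on the raw signal; the gradient flows through covariance().
        X_adv = X.clone().detach().requires_grad_(True)
        loss = nn.functional.mse_loss(model(covariance(X_adv)), theta_true)
        loss.backward()
        g = X_adv.grad
        # Sign step applied separately to real and imaginary parts.
        step = torch.complex(g.real.sign(), g.imag.sign())
        return (X_adv + eps * step).detach()

    # Hypothetical usage: 8-antenna array, 64 snapshots, batch of 4.
    model = DOANet(n_antennas=8)
    X = torch.randn(4, 8, 64, dtype=torch.complex64)      # raw array signal
    theta = torch.zeros(4, 1)                              # true DOAs (degrees)
    X_adv = fgsm_attack(model, X, theta, eps=1e-2)
    err = (model(covariance(X_adv)) - theta).abs().mean()  # degraded estimate

  • Because the perturbation is applied to the signal rather than to the covariance matrix itself, the resulting adversarial covariance remains a valid (Hermitian, positive semidefinite) sample covariance, which is what makes this attack surface realistic for covariance-input DOA models.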
Language
English
Identifiers
ISSN: 1070-9908
eISSN: 1558-2361
DOI: 10.1109/LSP.2023.3321557
Title ID: cdi_crossref_primary_10_1109_LSP_2023_3321557
