
Details

Author(s) / Contributors
Title
Connectome-based machine learning models are vulnerable to subtle data manipulations
Is part of
  • Patterns (New York, N.Y.), 2023-07, Vol.4 (7), p.100756-100756, Article 100756
Place / Publisher
United States: Elsevier Inc
Year of publication
2023
Link to full text
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Neuroimaging-based predictive models continue to improve in performance, yet a widely overlooked aspect of these models is “trustworthiness,” or robustness to data manipulations. High trustworthiness is imperative for researchers to have confidence in their findings and interpretations. In this work, we used functional connectomes to explore how minor data manipulations influence machine learning predictions. These manipulations included a method to falsely enhance prediction performance and adversarial noise attacks designed to degrade performance. Although these data manipulations drastically changed model performance, the original and manipulated data were extremely similar (r = 0.99) and did not affect other downstream analyses. Essentially, connectome data could be inconspicuously modified to achieve any desired prediction performance. Overall, our enhancement attacks and evaluation of existing adversarial noise attacks in connectome-based models highlight the need for countermeasures that improve trustworthiness and preserve the integrity of academic research and any potential translational applications.
  • Enhancement attacks falsely improve the performance of connectome-based models
  • Adversarial attacks degrade the performance of connectome-based models
  • Subtle data manipulations lead to large changes in performance
  • In recent years, machine learning models using brain functional connectivity have furthered our knowledge of brain-behavior relationships. The trustworthiness of these models has not yet been explored, and determining the extent to which data can be manipulated to change the results is a crucial step in understanding their trustworthiness. Here, we showed that only minor manipulations of the data could lead to drastically different performance.
  • Although this work focuses on machine learning models using brain functional connectivity data, the concepts investigated here apply to any scientific research that uses machine learning, especially with high-dimensional data. As machine learning becomes increasingly popular in many fields of scientific research, data manipulations may become a major obstacle to the integrity of scientific machine learning. Imperceptible data manipulations can drastically increase or decrease performance in machine learning models that use high-dimensional neuroimaging data. These manipulations could achieve nearly any desired prediction performance without noticeable changes to the data or any changes in other downstream analyses. The feasibility of data manipulations highlights the susceptibility of data sharing and scientific machine learning pipelines to fraudulent behavior.
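The enhancement-attack idea described above can be illustrated with a toy sketch (this is not the paper's actual method, and all data, parameters, and helper names below are illustrative): spreading a faint, label-aligned pattern across thousands of features fabricates predictive signal, while each subject's manipulated data remain almost perfectly correlated with the original.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for connectome data: n subjects x p edge features,
# with a behavioral score y that is genuinely unrelated to the features.
n, p = 100, 5000
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Toy "enhancement attack": spread a faint y-aligned pattern u across
# all p features. eps controls how visible the manipulation is.
eps = 0.1
u = rng.choice([-1.0, 1.0], size=p)
X_attacked = X + eps * np.outer(y, u)

def predict_r(data, y, n_train=70):
    """Simple connectome-style pipeline: correlation-based (marginal
    regression) weights from a training split, scored by correlating
    held-out predictions with the observed y."""
    w = data[:n_train].T @ y[:n_train]
    preds = data[n_train:] @ w
    return np.corrcoef(preds, y[n_train:])[0, 1]

r_clean = predict_r(X, y)            # near zero: there is no real signal
r_attacked = predict_r(X_attacked, y)  # large: fabricated signal

# Yet each subject's manipulated data stay almost identical to the original.
sims = [np.corrcoef(X[i], X_attacked[i])[0, 1] for i in range(n)]
print(round(r_clean, 2), round(r_attacked, 2), round(float(np.mean(sims)), 3))
```

Because the injected pattern is split over 5,000 features, no single feature changes noticeably, yet a linear model trained on the attacked data recovers the planted signal almost perfectly; this is the high-dimensionality loophole the abstract refers to.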
Language
English
Identifiers
ISSN: 2666-3899
eISSN: 2666-3899
DOI: 10.1016/j.patter.2023.100756
Title ID: cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_10382940
