Hybrid Network Based on Cross-Modal Feature Fusion for Diagnosis of Alzheimer’s Disease
Is part of
Ethical and Philosophical Issues in Medical Imaging, Multimodal Learning and Fusion Across Scales for Clinical Decision Support, and Topological Data Analysis for Biomedical Imaging, p.87-99
Place / Publisher
Cham: Springer Nature Switzerland
Link to full text
Quelle
Alma/SFX Local Collection
Descriptions/Notes
Early diagnosis of Alzheimer’s disease (AD) at stages such as mild cognitive impairment (MCI), followed by timely intervention and treatment, can effectively delay its further progression. Structural magnetic resonance imaging (sMRI) and positron emission tomography (PET) play an essential role in diagnosing AD and MCI because they reveal morphological changes such as brain atrophy. However, single-modality brain imaging data rarely provide comprehensive enough information for a thorough diagnosis of AD and MCI, and accurately locating the lesion area remains challenging. Convolutional neural networks (CNNs) have shown promising performance for diagnosing AD and MCI, but their ability to model global information is limited by the local receptive field of the convolutional sliding window. In contrast, the Transformer lacks the local inductive biases of convolution but uses a self-attention mechanism to model long-range dependencies. We therefore propose a novel CNN-Transformer framework based on cross-modal feature fusion for AD and MCI diagnosis. Specifically, we first employ a large kernel attention (LKA) module to learn an attention map; two subsequent branches, a CNN and a Transformer, then extract higher-level local and global features, respectively. Finally, a modality feature fusion block fuses the features of the two modalities. Extensive experiments on the ADNI dataset show that our model outperforms state-of-the-art methods.
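The data flow the abstract describes (per-modality feature extraction with self-attention, then channel-wise fusion of the sMRI and PET features) can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' implementation: the function names (`self_attention`, `fuse_modalities`), the single-head unweighted attention, and fusion by simple concatenation are all illustrative simplifications of the LKA/CNN/Transformer architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # single-head scaled dot-product self-attention over flattened image
    # tokens (projection weights omitted for brevity); models the "global"
    # Transformer branch of the framework
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

def fuse_modalities(feat_mri, feat_pet):
    # hypothetical modality feature fusion block: concatenate the two
    # modalities' features along the channel axis
    return np.concatenate([feat_mri, feat_pet], axis=-1)

# toy features: 16 tokens x 32 channels per modality
rng = np.random.default_rng(0)
f_mri = self_attention(rng.standard_normal((16, 32)))  # sMRI branch
f_pet = self_attention(rng.standard_normal((16, 32)))  # PET branch
fused = fuse_modalities(f_mri, f_pet)
print(fused.shape)  # (16, 64)
```

The fused (tokens, channels) tensor would then feed a classification head; in the actual paper the branches are learned CNN and Transformer stacks preceded by the LKA attention map rather than these stubs.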