
Details

Author(s) / Contributors
Title
Development and validation of a multi-modality fusion deep learning model for differentiating glioblastoma from solitary brain metastases
Is part of
  • Zhong nan da xue xue bao. Journal of Central South University. Yi xue ban, 2024-01, Vol.49 (1), p.58-67
Place / Publisher
China
Year of publication
2024
Link to full text
Source
MEDLINE
Descriptions/Notes
  • Glioblastoma (GBM) and brain metastases (BMs) are the two most common malignant brain tumors in adults. Magnetic resonance imaging (MRI) is a commonly used method for screening brain tumors and evaluating their prognosis, but conventional MRI sequences have limited specificity and sensitivity for the differential diagnosis of GBM and BMs. In recent years, deep neural networks have shown great potential for diagnostic classification and for building clinical decision support systems. This study applies radiomics features extracted with deep learning techniques to explore the feasibility of accurate preoperative classification of newly diagnosed GBM and solitary brain metastases (SBMs), and further explores the impact of multi-modality data fusion on the classification task. Standard-protocol cranial MRI data from 135 newly diagnosed GBM patients and 73 patients with SBMs confirmed by histopathological or clinical diagnosis were retrospectively analyzed. First, structural T1-weighted, contrast-enhanced T1-weighted (T1C), and T2-weighted sequences were selected as the 3 inputs to the model; regions of interest (ROIs) were manually delineated on the three registered modality images; multi-modality radiomics features were extracted; dimensionality was reduced with a random forest (RF)-based feature selection method; and the importance of each feature was further analyzed. Second, a contrastive disentanglement method was used to separate the shared and complementary features across the different modality features. Finally, each sample was classified as GBM or SBM by fusing these two kinds of features from the different modalities. Both the machine-learning classifiers on radiomics features and the multi-modal fusion method discriminated well between GBM and SBMs. Compared with single-modality data, the multi-modal fusion models using machine-learning algorithms such as support vector machine (SVM), logistic regression, RF, adaptive boosting (AdaBoost), and gradient boosting decision tree (GBDT) achieved significant improvements, with area under the curve (AUC) values of 0.974, 0.978, 0.943, 0.938, and 0.947, respectively. The contrastive disentangled multi-modal MR fusion method performed well: on the test set, the AUC, accuracy (ACC), sensitivity (SEN), and specificity (SPE) were 0.985, 0.984, 0.900, and 0.990, respectively. Compared with other multi-modal fusion methods, the proposed method achieved the best AUC, ACC, and SEN. In the ablation experiment verifying the contribution of each module, AUC, ACC, and SEN increased by 1.6%, 10.9%, and 15.0%, respectively, when the 3 loss functions were used simultaneously. A deep learning-based contrastive disentangled multi-modal MR radiomics feature fusion technique helps improve the accuracy of GBM versus SBMs classification.
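  • To make the classical-ML fusion baseline concrete, the following is a minimal Python sketch of the first stage described in the abstract: per-modality radiomics features are concatenated (early fusion), reduced with RF-based feature selection, and classified with logistic regression, scored by AUC. The array names, synthetic data, split, and hyperparameters are illustrative assumptions, not the study's configuration.

```python
# Sketch of RF-based feature selection + fused-feature classification.
# Synthetic stand-ins replace the real radiomics features (assumption).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 208  # 135 GBM + 73 SBM patients in the study; data here are synthetic
X_t1, X_t1c, X_t2 = (rng.normal(size=(n, 100)) for _ in range(3))
y = np.array([1] * 135 + [0] * 73)  # 1 = GBM, 0 = SBM

# Early fusion: concatenate the per-modality radiomics feature vectors.
X = np.hstack([X_t1, X_t1c, X_t2])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Random-forest-based feature selection, as in the paper's first stage;
# the fitted forest's feature_importances_ also supports the per-feature
# importance analysis mentioned in the abstract.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=500, random_state=0))
clf = make_pipeline(selector, StandardScaler(),
                    LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"fused-feature AUC: {auc:.3f}")
```

    Swapping LogisticRegression for SVC(probability=True), AdaBoostClassifier, or GradientBoostingClassifier reproduces the other baseline classifiers the abstract lists.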
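  • The contrastive disentanglement stage can be sketched as follows: each modality's features are encoded into a "shared" and a "complementary" code, an alignment term pulls the shared codes of the same patient together, and the fused codes feed the classifier. This is a rough illustration under stated assumptions; the encoder sizes, the use of only two modalities, and the third (decorrelation) loss term are our stand-ins, not the authors' architecture or losses.

```python
# Rough sketch of contrastive disentanglement for multi-modal fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityEncoder(nn.Module):
    """Splits one modality's feature vector into shared + complementary codes."""
    def __init__(self, in_dim: int, code_dim: int = 32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))
        self.complementary = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim))

    def forward(self, x):
        return self.shared(x), self.complementary(x)

def shared_alignment_loss(s_a, s_b):
    # Pull the shared codes of the same patient's two modalities together.
    return 1.0 - F.cosine_similarity(s_a, s_b, dim=-1).mean()

enc_t1, enc_t1c = ModalityEncoder(100), ModalityEncoder(100)
classifier = nn.Linear(4 * 32, 2)  # fused shared + complementary codes -> GBM/SBM

x_t1, x_t1c = torch.randn(8, 100), torch.randn(8, 100)  # synthetic batch
labels = torch.randint(0, 2, (8,))

s1, c1 = enc_t1(x_t1)
s2, c2 = enc_t1c(x_t1c)
logits = classifier(torch.cat([s1, c1, s2, c2], dim=-1))

# Three-term objective, echoing the ablation's "3 loss functions":
# classification + shared alignment + a decorrelation term keeping each
# complementary code distinct from its shared code (our stand-in choice).
loss = (F.cross_entropy(logits, labels)
        + shared_alignment_loss(s1, s2)
        + F.cosine_similarity(c1, s1, dim=-1).abs().mean())
loss.backward()
```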
Language
English; Chinese
Identifiers
ISSN: 1672-7347
DOI: 10.11817/j.issn.1672-7347.2024.230248
Title ID: cdi_pubmed_primary_38615167
