Author(s) / Contributors
Title
DCACNet: Dual context aggregation and attention-guided cross deconvolution network for medical image segmentation
Is part of
  • Computer methods and programs in biomedicine, 2022-02, Vol.214, p.106566-106566, Article 106566
Place / Publisher
Ireland: Elsevier B.V
Year of publication
2022
Source
Access via ScienceDirect (Elsevier)
Descriptions/Notes
  • Highlights:
    • Builds a reliable deep learning network framework, named DCACNet, for accurate segmentation of medical images.
    • Proposes a multi-scale cross-fusion encoding network to extract features.
    • Builds a dual context aggregation module to fuse context features at different scales.
    • Proposes an attention-guided cross deconvolution decoding network.
  • Background and Objective: Segmentation is a key step in biomedical image analysis tasks. Recently, convolutional neural networks (CNNs) have been increasingly applied in the field of medical image processing; however, standard models still have some drawbacks. Because of the significant loss of spatial information at the encoding stage, it is often difficult to restore the details of low-level visual features using simple deconvolution, and the generated feature maps are sparse, which degrades performance. This prompted us to study whether the deep feature information of the image can be better preserved in order to solve the sparsity problem of image segmentation models.
  • Methods: In this study, we (1) build a reliable deep learning network framework, named DCACNet, to improve segmentation performance for medical images; (2) propose a multiscale cross-fusion encoding network to extract features; (3) build a dual context aggregation module to fuse the context features at different scales and capture more fine-grained deep features; and (4) propose an attention-guided cross deconvolution decoding network to generate dense feature maps. We demonstrate the effectiveness of the proposed method on two publicly available datasets.
  • Results: DCACNet was trained and tested on the prepared datasets, and the experimental results show that the proposed model has better segmentation performance than previous models. For 4-class classification (CHAOS dataset), the mean Dice similarity coefficient (DSC) reached 91.03%. For 2-class classification (Herlev dataset), the accuracy, precision, sensitivity, specificity, and Dice score reached 96.77%, 90.40%, 94.20%, 97.50%, and 97.69%, respectively.
  • Conclusion: DCACNet achieved promising results on the prepared datasets and improved segmentation performance. It retains the deep feature information of the image better than other models and solves the sparsity problem of the medical image segmentation model.
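The core idea behind the dual context aggregation module described above — fusing encoder features from two resolutions and gating the result with attention before decoding — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the nearest-neighbour upsampling, the element-wise sum, the softmax channel gate, and all function names are simplifying assumptions chosen for clarity.

```python
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map (assumed interpolation)."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def channel_attention(x):
    """Gate channels by a softmax over their global-average-pooled responses
    (a stand-in for the learned attention weights a real network would use)."""
    w = x.mean(axis=(1, 2))          # squeeze: one scalar per channel -> (C,)
    w = np.exp(w - w.max())
    w = w / w.sum()                  # softmax over channels
    return x * w[:, None, None]      # re-weight each channel's feature map

def dual_context_aggregate(fine, coarse):
    """Fuse a fine-scale map (C, H, W) with a coarse-scale map (C, H/2, W/2)."""
    fused = fine + upsample2x(coarse)    # element-wise fusion across scales
    return channel_attention(fused)

rng = np.random.default_rng(0)
fine = rng.random((4, 8, 8))             # fine-scale encoder features
coarse = rng.random((4, 4, 4))           # coarse-scale encoder features
out = dual_context_aggregate(fine, coarse)
print(out.shape)  # (4, 8, 8)
```

The fused map keeps the fine scale's spatial resolution while injecting coarse-scale context, which is the general motivation the abstract gives for aggregating context before the decoder.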
Language
English
Identifiers
ISSN: 0169-2607
eISSN: 1872-7565
DOI: 10.1016/j.cmpb.2021.106566
Titel-ID: cdi_proquest_miscellaneous_2609461570
