
Details

Author(s) / Contributors
Title
DCANet: deep context attention network for automatic polyp segmentation
Is part of
  • The Visual computer, 2023-11, Vol.39 (11), p.5513-5525
Place / Publisher
Berlin/Heidelberg: Springer Berlin Heidelberg
Year of publication
2023
Source
Alma/SFX Local Collection
Descriptions / Notes
  • Automatic and accurate polyp segmentation is significant for the diagnosis and treatment of colorectal cancer. It is a challenging task due to the diversity of polyp shapes and sizes. Recently, various deep convolutional neural networks have been developed for polyp segmentation. However, most state-of-the-art methods perform poorly when segmenting small, flat, or noisy polyps. In this paper, we propose a novel Deep Context Attention Network (DCANet) for accurate polyp segmentation based on an encoder–decoder framework. ResNet34 is adopted as the encoder, and five functional modules are introduced to improve performance. Specifically, the improved local context attention module (LCAM) and global context module (GCM) are exploited to efficiently extract local multi-scale and global multi-receptive-field context information, respectively. Then, an enhanced feature fusion module (FFM) is devised to effectively select and aggregate context features through spatial-channel attention. Finally, equipped with elaborately designed multi-attention modules (MAM), new decoder and supervision blocks are developed to accurately predict polyp regions via powerful channel-spatial-channel attention. Extensive experiments are conducted on the Kvasir-SEG and EndoScene benchmark datasets. The results demonstrate that the proposed network achieves superior performance compared to other state-of-the-art models. The source code will be available at https://github.com/ZAKAUDD/DCANet.
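
  • Editorial note: the abstract describes an encoder–decoder segmentation network with a ResNet34 encoder and several attention modules (LCAM, GCM, FFM, MAM). As orientation only, the sketch below is a minimal, hypothetical PyTorch encoder–decoder skeleton with a ResNet34 encoder; it is not the authors' DCANet (their code is linked above), the attention modules are replaced by plain convolutional blocks, and all class and function names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet34


class ConvBlock(nn.Module):
    """Plain conv-BN-ReLU block; a placeholder, not the paper's attention modules."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class EncoderDecoderSketch(nn.Module):
    """Hypothetical encoder-decoder skeleton with a ResNet34 encoder (not DCANet)."""

    def __init__(self, num_classes=1):
        super().__init__()
        backbone = resnet34(weights=None)
        # Encoder stages taken from ResNet34, as mentioned in the abstract.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.enc1, self.enc2 = backbone.layer1, backbone.layer2   # 64, 128 channels
        self.enc3, self.enc4 = backbone.layer3, backbone.layer4   # 256, 512 channels
        # Decoder: placeholder conv blocks where DCANet uses its attention modules.
        self.dec3 = ConvBlock(512 + 256, 256)
        self.dec2 = ConvBlock(256 + 128, 128)
        self.dec1 = ConvBlock(128 + 64, 64)
        self.head = nn.Conv2d(64, num_classes, kernel_size=1)

    def _up_cat(self, x, skip):
        # Upsample to the skip feature's spatial size and concatenate channel-wise.
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return torch.cat([x, skip], dim=1)

    def forward(self, x):
        size = x.shape[2:]
        x = self.stem(x)
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        d3 = self.dec3(self._up_cat(e4, e3))
        d2 = self.dec2(self._up_cat(d3, e2))
        d1 = self.dec1(self._up_cat(d2, e1))
        # Predict a per-pixel polyp probability map at the input resolution.
        logits = F.interpolate(self.head(d1), size=size, mode="bilinear", align_corners=False)
        return torch.sigmoid(logits)


if __name__ == "__main__":
    model = EncoderDecoderSketch()
    mask = model(torch.randn(1, 3, 256, 256))
    print(mask.shape)  # torch.Size([1, 1, 256, 256])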
