
Details

Author(s) / Contributors
Title
Real-time deep learning semantic segmentation during intra-operative surgery for 3D augmented reality assistance
Is part of
  • International journal for computer assisted radiology and surgery, 2021-09, Vol.16 (9), p.1435-1445
Place / Publisher
Cham: Springer International Publishing
Year of publication
2021
Descriptions/Notes
  • Purpose: The current study proposes a Deep Learning (DL) and Augmented Reality (AR) based solution for an in-vivo robot-assisted radical prostatectomy (RARP), to improve on the precision of a previously published work from our group. We implemented a two-step automatic system that aligns a 3D virtual ad-hoc model of a patient’s organ with its 2D endoscopic image, to assist surgeons during the procedure. Methods: This approach uses a Convolutional Neural Network (CNN) based structure for semantic segmentation, followed by an elaboration of the obtained output that produces the parameters needed to attach the 3D model. We used a dataset obtained from 5 endoscopic videos (A, B, C, D, E), selected and tagged by our team’s specialists. We then evaluated the best-performing combination of segmentation architecture and neural network and tested the overlay performance. Results: U-Net stood out as the most effective architecture for segmentation. ResNet and MobileNet obtained similar Intersection over Union (IoU) results, but MobileNet was able to process almost twice as many operations per second. This segmentation technique outperformed the former work, obtaining an average IoU for the catheter of 0.894 (σ = 0.076) compared to 0.339 (σ = 0.195). These modifications also improved the 3D overlay performance, in particular the Euclidean distance between the predicted and actual model’s anchor point, from 12.569 (σ = 4.456) to 4.160 (σ = 1.448), and the geodesic distance between the predicted and actual model’s rotations, from 0.266 (σ = 0.131) to 0.169 (σ = 0.073). Conclusion: This work is a further step toward the adoption of DL and AR in the surgery domain. In future work, we will overcome the limits of this approach and ultimately improve every step of the surgical procedure.
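The abstract pairs a U-Net segmentation architecture with MobileNet or ResNet backbones, but does not say which framework the authors used. A minimal sketch of that pairing, using the open-source segmentation_models_pytorch package as one possible reconstruction (the encoder names, pre-training weights, single catheter class, and input size below are assumptions, not details from the paper):

```python
# Sketch only: U-Net decoder with a MobileNetV2 encoder, one plausible way to
# reproduce the architecture/backbone pairing described in the abstract.
import torch
import segmentation_models_pytorch as smp  # third-party package, not cited by the paper

model = smp.Unet(
    encoder_name="mobilenet_v2",   # assumption; "resnet34" would give a ResNet variant
    encoder_weights="imagenet",    # ImageNet pre-training is a common default, not confirmed
    in_channels=3,                 # RGB endoscopic frames
    classes=1,                     # single foreground class, e.g. the catheter
)
model.eval()

with torch.no_grad():
    frame = torch.randn(1, 3, 256, 448)        # dummy frame; the real input size is not specified
    mask = torch.sigmoid(model(frame)) > 0.5   # binary mask, shape (1, 1, 256, 448)
```

Under this reading, the lighter MobileNet encoder trades some capacity for throughput, which is consistent with the abstract's finding that it reaches an IoU similar to ResNet at roughly twice the speed.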
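The results are reported with three metrics: IoU on the segmentation masks, the Euclidean distance between predicted and actual anchor points, and the geodesic distance between predicted and actual rotations. A minimal sketch of these metrics, assuming binary NumPy masks, 3D anchor points, and rotations given as 3×3 matrices (the paper does not specify the exact representations or units):

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, gt).sum() / union)

def anchor_point_error(p_pred: np.ndarray, p_true: np.ndarray) -> float:
    """Euclidean distance between predicted and actual anchor points."""
    return float(np.linalg.norm(p_pred - p_true))

def rotation_geodesic(R_pred: np.ndarray, R_true: np.ndarray) -> float:
    """Geodesic distance on SO(3): the angle of the relative rotation, in radians."""
    R_rel = R_pred @ R_true.T                              # rotation taking R_true to R_pred
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)  # guard numerical drift
    return float(np.arccos(cos_theta))
```

With this convention the reported rotation errors of 0.266 and 0.169 would be angles in radians, though the abstract does not state the unit explicitly.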
