{"title":"A deep learning-driven method for safe and effective ERCP cannulation.","authors":"Yuying Liu, Xin Chen, Siyang Zuo","doi":"10.1007/s11548-025-03329-w","DOIUrl":null,"url":null,"abstract":"<p><strong>Purpose: </strong>In recent years, the detection of the duodenal papilla and surgical cannula has become a critical task in computer-assisted endoscopic retrograde cholangiopancreatography (ERCP) cannulation operations. The complex surgical anatomy, coupled with the small size of the duodenal papillary orifice and its high similarity to the background, poses significant challenges to effective computer-assisted cannulation. To address these challenges, we present a deep learning-driven graphical user interface (GUI) to assist ERCP cannulation.</p><p><strong>Methods: </strong>Considering the characteristics of the ERCP scenario, we propose a deep learning method for duodenal papilla and surgical cannula detection, utilizing four swin transformer decoupled heads (4STDH). Four different prediction heads are employed to detect objects of different sizes. Subsequently, we integrate the swin transformer module to identify attention regions to explore prediction potential deeply. Moreover, we decouple the classification and regression networks, significantly improving the model's accuracy and robustness through the separation prediction. Simultaneously, we introduce a dataset on papilla and cannula (DPAC), consisting of 1840 annotated endoscopic images, which will be publicly available. We integrated 4STDH and several state-of-the-art methods into the GUI and compared them.</p><p><strong>Results: </strong>On the DPAC dataset, 4STDH outperforms state-of-the-art methods with an mAP of 93.2% and superior generalization performance. Additionally, the GUI provides real-time positions of the papilla and cannula, along with the planar distance and direction required for the cannula to reach the cannulation position.</p><p><strong>Conclusion: </strong>We validate the GUI's performance in human gastrointestinal endoscopic videos, showing deep learning's potential to enhance the safety and efficiency of clinical ERCP cannulation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3000,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Computer Assisted Radiology and Surgery","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1007/s11548-025-03329-w","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Abstract
Purpose: In recent years, the detection of the duodenal papilla and surgical cannula has become a critical task in computer-assisted endoscopic retrograde cholangiopancreatography (ERCP) cannulation operations. The complex surgical anatomy, coupled with the small size of the duodenal papillary orifice and its high similarity to the background, poses significant challenges to effective computer-assisted cannulation. To address these challenges, we present a deep learning-driven graphical user interface (GUI) to assist ERCP cannulation.
Methods: Considering the characteristics of the ERCP scenario, we propose a deep learning method for duodenal papilla and surgical cannula detection that uses four Swin Transformer decoupled heads (4STDH). Four prediction heads operating at different scales detect objects of different sizes. We integrate a Swin Transformer module to identify attention regions and exploit the prediction potential more fully. Moreover, we decouple the classification and regression networks, and this separation of the two predictions significantly improves the model's accuracy and robustness. We also introduce a dataset on papilla and cannula (DPAC), consisting of 1840 annotated endoscopic images, which will be made publicly available. We integrated 4STDH and several state-of-the-art methods into the GUI and compared them.
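To make the idea of a decoupled detection head concrete, the following is a minimal sketch (not the authors' 4STDH implementation): classification and box-regression outputs come from separate convolutional branches, as the abstract describes. All layer sizes, names, and the anchor/objectness layout are illustrative assumptions.

```python
# Illustrative sketch of a decoupled detection head; layer widths and output
# layout are assumptions, not the 4STDH architecture from the paper.
import torch
import torch.nn as nn


class DecoupledHead(nn.Module):
    def __init__(self, in_channels: int, num_classes: int, num_anchors: int = 1):
        super().__init__()
        # Separate branch for class scores.
        self.cls_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(in_channels, num_anchors * num_classes, 1),
        )
        # Separate branch for box regression (x, y, w, h) plus objectness.
        self.reg_branch = nn.Sequential(
            nn.Conv2d(in_channels, in_channels, 3, padding=1),
            nn.SiLU(),
            nn.Conv2d(in_channels, num_anchors * 5, 1),
        )

    def forward(self, feat: torch.Tensor):
        # Classification and regression are predicted independently.
        return self.cls_branch(feat), self.reg_branch(feat)


if __name__ == "__main__":
    # One head per feature-map scale; a multi-scale detector would attach
    # four such heads to feature maps of different resolutions.
    head = DecoupledHead(in_channels=256, num_classes=2)  # papilla, cannula
    cls_out, reg_out = head(torch.randn(1, 256, 40, 40))
    print(cls_out.shape, reg_out.shape)
```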
Results: On the DPAC dataset, 4STDH outperforms state-of-the-art methods with an mAP of 93.2% and superior generalization performance. Additionally, the GUI provides real-time positions of the papilla and cannula, along with the planar distance and direction required for the cannula to reach the cannulation position.
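The kind of geometric read-out the GUI reports can be sketched as follows. This is a hedged illustration only: the planar distance and direction are computed here from bounding-box centres in pixel coordinates, which is an assumption about how the quantities are defined, not the paper's exact formulation.

```python
# Hypothetical computation of planar distance and direction from the detected
# cannula to the papilla, using bounding-box centres (assumed box format:
# x_min, y_min, x_max, y_max in image pixels).
import math


def box_center(box):
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)


def distance_and_direction(cannula_box, papilla_box):
    cx, cy = box_center(cannula_box)
    px, py = box_center(papilla_box)
    dx, dy = px - cx, py - cy
    distance = math.hypot(dx, dy)                 # planar distance in pixels
    direction = math.degrees(math.atan2(dy, dx))  # angle w.r.t. image x-axis
    return distance, direction


if __name__ == "__main__":
    dist, ang = distance_and_direction((100, 120, 140, 160), (300, 200, 340, 240))
    print(f"distance: {dist:.1f} px, direction: {ang:.1f} deg")
```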
Conclusion: We validate the GUI's performance in human gastrointestinal endoscopic videos, showing deep learning's potential to enhance the safety and efficiency of clinical ERCP cannulation.
Journal introduction:
The International Journal of Computer Assisted Radiology and Surgery (IJCARS) is a peer-reviewed journal that provides a platform for closing the gap between medical and technical disciplines, and encourages interdisciplinary research and development activities in an international environment.