
International Journal of Computer Assisted Radiology and Surgery — Latest Articles

AURA-CVC: Autonomous Ultrasound-guided Robotic Assistance for Central Venous Catheterization.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-25 | DOI: 10.1007/s11548-026-03572-9
Deepak Raina, Lidia Al-Zogbi, Brian Teixeira, Vivek Singh, Ankur Kapoor, Thorsten Fleiter, Muyinatu A Lediju Bell, Vinciya Pandian, Axel Krieger

Purpose: Central venous catheterization (CVC) is a critical medical procedure for vascular access, hemodynamic monitoring, and life-saving interventions. Its success remains challenging due to the need for continuous ultrasound-guided visualization of a target vessel and approaching needle, which is further complicated by anatomical variability and operator dependency. Errors in needle placement can lead to life-threatening complications. While robotic systems offer a potential solution, achieving full autonomy remains challenging. In this work, we propose an end-to-end robotic ultrasound-guided CVC pipeline, from scan initialization to needle insertion.

Methods: We introduce a deep-learning model that identifies clinically relevant anatomical landmarks from a depth image of the patient's neck, obtained using an RGB-D camera, to autonomously define the scanning region and paths. A robot motion-planning framework then scans, segments, reconstructs, and localizes the vessels (veins and arteries), followed by identification of the optimal insertion zone. Finally, a needle guidance module plans the insertion under ultrasound guidance with the operator's feedback. The pipeline was validated on a high-fidelity commercial phantom across 10 simulated clinical scenarios.

Results: The proposed pipeline achieved 10 out of 10 successful needle placements on the first attempt. Vessels were reconstructed with a mean error of 2.15 mm, and autonomous needle insertion was performed with an error close to or below 1 mm.
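The 2.15 mm mean reconstruction error can be read as an average nearest-point distance between reconstructed and reference vessel geometry. A minimal sketch of that metric (the paper's exact error definition is not given in the abstract):

```python
import numpy as np

def mean_reconstruction_error(recon_pts, ref_pts):
    """Mean distance from each reconstructed point to its nearest
    reference point (one plausible reading of 'mean error'; the
    paper's exact metric is not specified in the abstract)."""
    recon = np.asarray(recon_pts, dtype=float)
    ref = np.asarray(ref_pts, dtype=float)
    # pairwise distances, shape (N_recon, N_ref)
    d = np.linalg.norm(recon[:, None, :] - ref[None, :, :], axis=-1)
    return d.min(axis=1).mean()

# toy example: reconstructed centerline offset 2 mm perpendicular to the reference
ref = np.stack([np.linspace(0, 50, 100), np.zeros(100), np.zeros(100)], axis=1)
recon = ref + np.array([0.0, 2.0, 0.0])
print(mean_reconstruction_error(recon, ref))  # 2.0
```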

Conclusion: To our knowledge, this is the first robotic CVC system demonstrated on a high-fidelity phantom with integrated planning, scanning, and insertion. Experimental results show its potential for clinical translation.

Citations: 0
Standardizing ACL tunnel placement: an automated method for knee quadrant computation.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-25 | DOI: 10.1007/s11548-026-03578-3
Yufan Wang, Zhengliang Li, Yangyang Yang, Yinghui Hua, Tsung-Yuan Tsai

Purpose: Anatomical tunnel placement in anterior cruciate ligament (ACL) reconstruction is essential for better functional recovery and fewer complications. However, subjective factors and ambiguous definitions reduce the accuracy and efficiency of quantifying ACL position on 3D models. This study aims to develop and validate a fully automated framework for standardized 3D quadrant coordinate computation of femoral and tibial ACL footprints, enabling objective preoperative planning and postoperative evaluation.

Methods: An nnUNet-based foundation network was fine-tuned to reconstruct distal femur and proximal tibia 3D models from CT or MRI data. Automated template registration and morphological analysis were used to determine anatomical planes and generate individualized quadrant coordinate systems. The pipeline was validated on both CT and MRI datasets, comparing location accuracy, calculation repeatability, and time efficiency against manual methods.
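A quadrant coordinate system expresses a footprint centroid as normalized fractions along anatomical axes, in the spirit of the Bernard-Hertel grid. A simplified 2-D sketch, assuming the axes are already given (the paper derives them automatically from template registration and morphological analysis):

```python
import numpy as np

def quadrant_coords(p, origin, x_axis_end, y_axis_end):
    """Express point p as normalized (u, v) fractions inside a quadrant
    rectangle defined by an origin and two axis endpoints. This is a
    simplified 2-D illustration, not the paper's full 3-D pipeline."""
    p, o = np.asarray(p, float), np.asarray(origin, float)
    ex = np.asarray(x_axis_end, float) - o
    ey = np.asarray(y_axis_end, float) - o
    u = np.dot(p - o, ex) / np.dot(ex, ex)  # fraction along first axis
    v = np.dot(p - o, ey) / np.dot(ey, ey)  # fraction along second axis
    return u, v

# hypothetical footprint centroid 30% / 25% of the way across a 40 x 20 mm grid
u, v = quadrant_coords([12.0, 5.0], [0, 0], [40, 0], [0, 20])
print(u, v)  # 0.3 0.25
```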

Results: The 3D distances between the actual and automatically predicted centroids were 1.72 ± 0.94 mm and 1.47 ± 1.06 mm for the femur and tibia, respectively, while the errors of the manual method were 1.89 ± 1.42 mm and 2.11 ± 1.27 mm. The method achieved an average repeatability of 0.992 in quadrant calculations with different initializations, while the ICCs for the two manual annotators were 0.961 (A), 0.946 (B), and 0.882 (A&B). The processing time for generating quadrant coordinate systems was significantly reduced to an average of 4.7 ± 1.3 s, compared with 8.5 ± 2.1 min for manual annotation.

Conclusion: This study presented the first fully automated, modality-independent method for 3D quadrant coordinate computation in knee surgery. The proposed framework delivers robust and standardized ACL anatomical locations across both CT and MRI data, enhancing the clinical efficiency of the preoperative planning and postoperative assessment of ACL reconstruction.

Citations: 0
Navigated hepatic tumor resection using intraoperative ultrasound imaging.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-23 | DOI: 10.1007/s11548-026-03581-8
Karin A Olthof, Theo J M Ruers, Tiziano Natali, Lisanne P J Venix, Jasper N Smit, Anne G den Hartog, Niels F M Kok, Matteo Fusaglia, Koert F D Kuhlmann

Purpose: This proof-of-concept study evaluates the feasibility and accuracy of an ultrasound-based navigation system for open liver surgery. Unlike most conventional systems that rely on registration to preoperative imaging, the proposed system provides navigation-guided resection using 3D models generated from intraoperative ultrasound.

Methods: A pilot study was conducted in 25 patients undergoing resection of liver metastases. The first 5 cases served to optimize the workflow; the remaining 20 formed the evaluation cohort. Intraoperatively, an electromagnetic sensor compensated for organ motion, after which an ultrasound volume was acquired. Vasculature was segmented automatically, and tumors semi-automatically using region growing (n = 15) or a deep learning algorithm (n = 5). The resulting 3D model was visualized alongside tracked surgical instruments. Accuracy was assessed by comparing the distance between surgical clips and tumors in the navigation software with the same distance on a postoperative CT of the resected specimen.
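Region growing, used here for semi-automatic tumor segmentation, expands from a seed voxel to connected voxels of similar intensity. A generic sketch (the clinical implementation and its similarity criterion are not detailed in the abstract):

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, tol):
    """Flood-fill segmentation: include 6-connected voxels whose
    intensity lies within `tol` of the seed intensity. Generic
    textbook variant, not the authors' exact implementation."""
    vol = np.asarray(volume, float)
    seed_val = vol[seed]
    mask = np.zeros(vol.shape, dtype=bool)
    mask[seed] = True
    q = deque([seed])
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < vol.shape[i] for i in range(3)) \
               and not mask[n] and abs(vol[n] - seed_val) <= tol:
                mask[n] = True
                q.append(n)
    return mask

# toy volume: a bright 2x2x2 "tumor" inside a dark background
vol = np.zeros((4, 4, 4)); vol[1:3, 1:3, 1:3] = 100.0
print(region_grow(vol, (1, 1, 1), tol=10).sum())  # 8
```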

Results: Navigation was successfully established in all 20 patients. However, 4 cases were excluded from the accuracy assessment due to intraoperative sensor detachment (n = 3) or incorrect data recording (n = 1). The complete navigation workflow was operational within 5-10 min. In the 16 evaluable patients, 78 clip-to-tumor distances were analyzed. The median navigation accuracy was 3.2 mm [IQR: 2.8-4.8 mm]; an R0 resection was achieved in 15/16 (93.8%) patients, and one patient had an R1 vascular resection.

Conclusion: Navigation based solely on intraoperative ultrasound is feasible and accurate for liver surgery. This approach paves the way for simpler and more accurate image guidance systems.

Citations: 0
CVS assessment via distillation-based self-supervised and multiple instance learning in laparoscopic cholecystectomy.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-18 | DOI: 10.1007/s11548-026-03580-9
Hao Wang, Yutao Zhang, Yuxuan Yang, Yuanbo Zhu, Rui Xu

Purpose: Accurate and automated assessment of the critical view of safety (CVS) is crucial for preventing bile duct injuries during laparoscopic cholecystectomy (LC). Existing methods often rely on costly segmentation labels or sequential inputs, limiting generalization and spatiotemporal understanding. This study proposes an efficient framework that removes the need for segmentation annotations while enhancing model robustness and temporal-spatial comprehension.

Methods: We introduce SMIL, a novel framework for automated CVS assessment that combines distillation-based self-supervised pretraining and multiple instance learning (MIL). A video transformer is first pretrained using label-free self-distillation to capture rich spatiotemporal features; it is then fine-tuned via MIL by fusing global and local representations for multi-label CVS classification. We conducted a benchmark evaluation on the public Endoscapes2023 dataset, comprising 201 LC videos whose CVS-relevant frames are released at 1 fps (58,813 frames in total); training/validation/testing followed the official video-level split of 120/41/40 videos.
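The MIL fine-tuning step pools frame-level embeddings into a video-level prediction. The abstract does not specify the pooling operator; attention-based MIL pooling is a common choice, sketched here with illustrative shapes:

```python
import numpy as np

def attention_mil_pool(instance_feats, w_attn, v_attn):
    """Attention-based MIL pooling: score each frame embedding,
    softmax over the bag, return the weighted bag embedding.
    A generic MIL sketch; the paper's exact fusion of global and
    local representations may differ."""
    h = np.tanh(instance_feats @ v_attn)      # (n_frames, d_attn)
    scores = h @ w_attn                       # (n_frames,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                              # normalized attention weights
    return a @ instance_feats, a              # bag embedding, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))              # 8 frames, 16-dim embeddings
bag, weights = attention_mil_pool(feats, rng.normal(size=4),
                                  rng.normal(size=(16, 4)))
print(bag.shape, round(weights.sum(), 6))  # (16,) 1.0
```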

Results: Experimental results on the official test partition show that the SMIL framework outperforms state-of-the-art methods without relying on segmentation labels. Compared to the strongest label-free baseline, SMIL achieves gains of 3.21% in mean average precision (mAP) and 2.74% in balanced accuracy, setting a new benchmark for automated CVS assessment without dense annotations. Notably, SMIL also surpasses segmentation-supervised models in mAP, further highlighting its efficient learning capability.

Conclusion: The SMIL framework enables automated CVS assessment without segmentation annotations or sequential inputs. By combining self-supervised and multiple instance learning, it enhances spatiotemporal understanding and generalization in LC surgeries, offering both theoretical insights and practical value for surgical safety.

Citations: 0
Development of a wire-driven film-type device for tissue elimination in rectal cancer endoscopic surgery.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-16 | DOI: 10.1007/s11548-026-03570-x
Masaaki Kuruma, Ryoto Fukunaka, Hiro Hasegawa, Masaaki Ito, Satoshi Konishi

Purpose: In rectal cancer endoscopic surgery, tissue hangs down and obstructs the surgical field when a scalpel is used to make an incision to expose the tumor. A tissue-elimination device is therefore needed, as lifting the hanging tissue secures the surgical field during this procedure.

Methods: We developed a wire-driven film-type device for tissue elimination that can operate even in confined spaces such as the rectum. The device is composed of a polyethylene terephthalate film and stainless steel, with a belt-loop structure to reduce film swelling. The belt loop prevents the film from swelling and allows the device to move without being restricted by obstacles above it. In addition, the device generates a force exceeding 1 N over a displacement of 0-10 mm. We used a mechanical model to analyze the relationship between the force at the device tip and the tensile force acting at the belt-loop position; this analysis enabled optimization of the belt-loop position.
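The abstract does not give the mechanical model's equations. As a purely hypothetical illustration, if the film were treated as a rigid lever pivoting at its base, the tip force would scale with the ratio of the belt-loop and tip moment arms; the reported forces would then imply effective arm ratios of roughly 0.11 (original) and 0.26 (optimized) at 40 N tension:

```python
def tip_force(tension, loop_arm, tip_arm):
    """Moment balance about the film base: T * loop_arm = F_tip * tip_arm.
    Hypothetical rigid-lever simplification; the paper's model of the
    flexible film is more detailed and the arm ratios are illustrative."""
    return tension * loop_arm / tip_arm

# arm ratios back-calculated from the reported tip forces at T = 40 N
print(tip_force(40.0, 0.10625, 1.0))  # ≈ 4.25 N (before optimization)
print(tip_force(40.0, 0.2575, 1.0))   # ≈ 10.3 N (optimized belt-loop position)
```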

Results: A maximum force of 4.25 N was achieved at a tensile force of 40 N under the specified displacement conditions. With the optimized belt-loop position, the device achieved a maximum force of 10.3 N at a tensile force of 40 N, approximately 2.4 times that of the device before optimization.

Conclusion: The belt-loop position of the wire-driven film-type device for tissue elimination can be optimized to satisfy the required specifications. Furthermore, the evaluation results indicate that the device performs sufficiently well for use in rectal cancer endoscopic surgery.

Citations: 0
A hybrid self-supervised teacher-student model for predicting neurovascular bundle preservation in prostatectomy videos.
IF 2.3 | Medicine, CAS Tier 3 | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2026-02-12 | DOI: 10.1007/s11548-025-03544-5
Diego Andrés Del Aguila Moraga, Huitao Wang, Satoshi Ando, Hayato Hoshina, Hiroshi Kawahira, Yukihiro Nomura, Toshiya Nakaguchi

Purpose: Preserving neurovascular bundles (NVB) during robot-assisted radical prostatectomy (RARP) is vital for reducing postoperative complications such as urinary incontinence and erectile dysfunction. Building on our previous work in ensemble-based NVB classification, we propose the hybrid self-supervised teacher-student model (Hybrid T-S model) that leverages multi-task learning to predict NVB preservation in prostatectomy videos.

Methods: Our approach integrates a self-supervised framework (DINO) as an online self-distillation objective on multi-crop views to learn robust embeddings in a limited data setting, rather than as a stand-alone large-scale pretraining. A teacher encoder, which is an exponential moving average (EMA) of the student encoder, and a reconstruction decoder are trained jointly with a classification head in a single end-to-end framework. This model is evaluated on single frames from patients who underwent RARP surgery.
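The EMA teacher update described above can be sketched in a few lines (the momentum value here is illustrative; the paper does not report its schedule):

```python
def ema_update(teacher_params, student_params, momentum=0.996):
    """DINO-style teacher update: teacher weights are an exponential
    moving average of the student's. The default momentum is a common
    DINO value, not one reported by this paper."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# toy scalar "weights" to show the update direction
teacher = [1.0, 2.0]
student = [0.0, 0.0]
teacher = ema_update(teacher, student, momentum=0.9)
print(teacher)  # [0.9, 1.8]
```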

Results: Our experimental evaluation shows that the Hybrid T-S model outperforms previous NVB classification methods. This highlights the benefits of integrating self-supervised learning and multi-task objectives in this surgical context. We achieved an average accuracy of 86.55%, precision of 83.93%, recall of 90.73%, F1-score of 87%, and AUROC of 88.35%, based on fivefold cross-validation.

Conclusion: Incorporating representation learning through self-distillation, classification, and reconstruction provides complementary signals that enhance the prediction of NVB preservation. Our Hybrid T-S model can assist surgeons in real decision-making and improve patient recovery.

Citations: 0
ProstNFound+: A Prospective Study using Medical Foundation Models for Prostate Cancer Detection.
IF 2.3 CAS Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-12 DOI: 10.1007/s11548-025-03561-4
Paul F R Wilson, Mohamed Harmanani, Minh Nguyen Nhat To, Amoon Jamzad, Tarek Elghareb, Zhuoxin Guo, Adam Kinnaird, Brian Wodlinger, Purang Abolmaesumi, Parvin Mousavi

Purpose: Medical foundation models (FMs) offer a path to build high-performance diagnostic systems. However, their application to prostate cancer (PCa) detection from micro-ultrasound (μUS) remains untested in clinical settings. We present ProstNFound+, an adaptation of FMs for PCa detection from μUS, along with its first prospective validation.

Methods: ProstNFound+ incorporates a medical FM, adapter tuning, and a custom prompt encoder that embeds PCa-specific clinical biomarkers. The model generates a cancer heatmap and a risk score for clinically significant PCa. Following training on multicenter retrospective data, the model is prospectively evaluated on data acquired five years later from a new clinical site. Model predictions are benchmarked against standard clinical scoring protocols (PRI-MUS and PI-RADS).

Results: ProstNFound+ shows strong generalization to the prospective data, with no performance degradation compared to retrospective evaluation. It aligns closely with clinical scores and produces interpretable heatmaps consistent with biopsy-confirmed lesions.

Conclusion: The results highlight its potential for clinical deployment, offering a scalable and interpretable alternative to expert-driven protocols.
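Adapter tuning, mentioned in the Methods, typically inserts small trainable bottleneck layers into a frozen foundation model. A hedged sketch of one such layer follows; the bottleneck-with-residual form and the dimensions are generic assumptions, not ProstNFound+'s actual architecture:

```python
import numpy as np

# Generic bottleneck adapter: down-project, nonlinearity, up-project,
# plus a residual connection so the frozen backbone's features pass through.
# This is an illustrative sketch, not the paper's implementation.
def adapter(x, w_down, w_up):
    h = np.maximum(w_down @ x, 0.0)  # ReLU after down-projection
    return x + w_up @ h              # residual up-projection
```

Because of the residual connection, an adapter initialized with zero weights leaves the backbone's features unchanged, which makes it safe to insert into a pretrained model before tuning.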

{"title":"ProstNFound+: A Prospective Study using Medical Foundation Models for Prostate Cancer Detection.","authors":"Paul F R Wilson, Mohamed Harmanani, Minh Nguyen Nhat To, Amoon Jamzad, Tarek Elghareb, Zhuoxin Guo, Adam Kinnaird, Brian Wodlinger, Purang Abolmaesumi, Parvin Mousavi","doi":"10.1007/s11548-025-03561-4","DOIUrl":"https://doi.org/10.1007/s11548-025-03561-4","url":null,"abstract":"<p><strong>Purpose: </strong>Medical foundation models (FMs) offer a path to build high-performance diagnostic systems. However, their application to prostate cancer (PCa) detection from micro-ultrasound ( <math><mi>μ</mi></math> US) remains untested in clinical settings. We present ProstNFound+, an adaptation of FMs for PCa detection from <math><mi>μ</mi></math> US, along with its first prospective validation.</p><p><strong>Methods: </strong>ProstNFound+ incorporates a medical FM, adapter tuning, and a custom prompt encoder that embeds PCa-specific clinical biomarkers. The model generates a cancer heatmap and a risk score for clinically significant PCa. Following training on multicenter retrospective data, the model is prospectively evaluated on data acquired five years later from a new clinical site. Model predictions are benchmarked against standard clinical scoring protocols (PRI-MUS and PI-RADS).</p><p><strong>Results: </strong>ProstNFound+ shows strong generalization to the prospective data, with no performance degradation compared to retrospective evaluation. 
It aligns closely with clinical scores and produces interpretable heatmaps consistent with biopsy-confirmed lesions.</p><p><strong>Conclusion: </strong>The results highlight its potential for clinical deployment, offering a scalable and interpretable alternative to expert-driven protocols.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2026-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146168025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BrightVAE: luminosity enhancement in underexposed endoscopic images.
IF 2.3 CAS Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-02 DOI: 10.1007/s11548-026-03573-8
Farzaneh Koohestani, Zahra Nabizadeh, Nader Karimi, Shahram Shirani, Shadrokh Samavi

Purpose: Low-light endoscopic images often lack contrast and clarity, obscuring anatomical details and reducing diagnostic accuracy. This study develops a method to enhance image brightness and visibility, enabling clearer visualization of critical structures to support precise medical diagnoses and improve patient outcomes.

Methods: To specifically address nonuniform illumination, we propose BrightVAE, a model that uses a dual-receptive-field architecture to decouple global brightness correction from local texture preservation. Integrated attention-based modules (Attencoder and Attenquant) explicitly target and amplify underexposed regions while preventing over-saturation, thereby recovering human-evaluable details in shadowed areas. The model was trained and tested on a public endoscopic dataset, and its performance was evaluated against other techniques using quality metrics.

Results: The model outperformed alternatives, improving PSNR by 3.252 dB, SSIM by 0.045, and LPIPS by 0.014 over the best previously reported model, achieving a PSNR of 30.576, an SSIM of 0.879, and an LPIPS of 0.133, ensuring superior visibility of shadowed regions.

Conclusion: This approach advances endoscopic imaging by delivering sharper, reliable images, enhancing diagnostic precision in clinical practice. Improved visualization supports better detection of abnormalities, potentially leading to more effective treatment decisions and enhanced patient care.
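PSNR, the headline metric in the Results above, has a standard definition that can be sketched as follows (assuming images normalized to [0, 1]; this is the textbook formula, not code from the paper):

```python
import numpy as np

# Peak signal-to-noise ratio in dB between a reference image and an
# enhanced image, both assumed to lie in [0, max_val].
def psnr(reference, enhanced, max_val=1.0):
    mse = np.mean((np.asarray(reference) - np.asarray(enhanced)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For intuition, each 10·log10(2) ≈ 3.01 dB of PSNR improvement corresponds to halving the mean squared error, so the reported 3.252 dB gain roughly halves the residual error.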

{"title":"BrightVAE: luminosity enhancement in underexposed endoscopic images.","authors":"Farzaneh Koohestani, Zahra Nabizadeh, Nader Karimi, Shahram Shirani, Shadrokh Samavi","doi":"10.1007/s11548-026-03573-8","DOIUrl":"https://doi.org/10.1007/s11548-026-03573-8","url":null,"abstract":"<p><strong>Purpose: </strong>Low-light endoscopic images often lack contrast and clarity, obscuring anatomical details and reducing diagnostic accuracy. This study develops a method to enhance image brightness and visibility, enabling clearer visualization of critical structures to support precise medical diagnoses and improve patient outcomes.</p><p><strong>Methods: </strong>To specifically address nonuniform illumination, we propose BrightVAE, a model that uses a dual-receptive-field architecture to decouple global brightness correction from local texture preservation. Integrated attention-based modules (Attencoder and Attenquant) explicitly target and amplify underexposed regions while preventing over-saturation, thereby recovering human-evaluable details in shadowed areas. The model was trained and tested on a public endoscopic dataset, and its performance was evaluated against other techniques using quality metrics.</p><p><strong>Results: </strong>The model outperformed alternatives, improving PSNR by 3.252 units, structural detail by 0.045, and perceptual quality by 0.014 compared to the best model before us, achieving a PSNR of 30.576, SSIM of 0.879, and LPIPS of 0.133, ensuring superior visibility of shadowed regions.</p><p><strong>Conclusion: </strong>This approach advances endoscopic imaging by delivering sharper, reliable images, enhancing diagnostic precision in clinical practice. 
Improved visualization supports better detection of abnormalities, potentially leading to more effective treatment decisions and enhanced patient care.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146108457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Lexomics, or why to extract relevant information from radiology reports through LLMs.
IF 2.3 CAS Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-01 Epub Date: 2025-09-22 DOI: 10.1007/s11548-025-03521-y
Teodoro Martín-Noguerol, Pilar López-Úbeda, Carolina Díaz-Angulo, Antonio Luna

Purpose: The application of large language models (LLMs) to radiology reports aims to enhance the extraction of meaningful textual data, improving clinical decision-making and patient management. Similar to radiomics in image analysis, lexomics seeks to reveal hidden patterns in radiology reports to support diagnosis, classification, and structured reporting.

Methods: LLMs and natural language processing (NLP) algorithms analyze radiology reports to extract relevant information, refine differential diagnoses, and integrate clinical data. These models process structured and unstructured text, identifying patterns and correlations that may otherwise go unnoticed. Applications include automated structured reporting, quality control, and enhanced communication of incidental and urgent findings.

Results: LLMs have demonstrated the ability to assist radiologists in real time, standardizing classifications, improving report clarity, and enhancing the integration of radiology reports into electronic health records (EHRs). They support radiologists by reducing redundancies, structuring free-text reports, and detecting clinically relevant insights. Unlike radiomics, lexomics requires minimal computational power, making it more accessible in clinical settings.

Conclusion: Lexomics represents a significant advancement in AI-driven radiology, optimizing report utilization and communication. Future research should focus on addressing challenges such as data privacy, bias mitigation, and validation in diverse clinical scenarios to ensure ethical and effective implementation in radiological practice.
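To see what the LLM-based extraction advocated above improves on, the simplest baseline is keyword matching over report sentences. A toy sketch, where the keyword list and report text are illustrative only:

```python
import re

# Toy keyword baseline for flagging urgent findings in a free-text report.
# Real lexomics pipelines would use LLMs/NLP models; the keywords here are
# illustrative examples, not a clinically validated list.
URGENT_TERMS = ("pneumothorax", "hemorrhage", "embolism")

def flag_urgent(report_text, keywords=URGENT_TERMS):
    # Split into sentences on terminal punctuation, then keep any
    # sentence containing an urgent term (case-insensitive).
    sentences = re.split(r"(?<=[.!?])\s+", report_text.strip())
    return [s for s in sentences if any(k in s.lower() for k in keywords)]
```

Such baselines miss negations ("no pneumothorax") and paraphrase, which is precisely the gap that contextual LLM-based extraction is meant to close.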

{"title":"Lexomics, or why to extract relevant information from radiology reports through LLMs.","authors":"Teodoro Martín-Noguerol, Pilar López-Úbeda, Carolina Díaz-Angulo, Antonio Luna","doi":"10.1007/s11548-025-03521-y","DOIUrl":"10.1007/s11548-025-03521-y","url":null,"abstract":"<p><strong>Purpose: </strong>The application of large language models (LLMs) to radiology reports aims to enhance the extraction of meaningful textual data, improving clinical decision-making and patient management. Similar to radiomics in image analysis, lexomics seeks to reveal hidden patterns in radiology reports to support diagnosis, classification, and structured reporting.</p><p><strong>Methods: </strong>LLMs and natural language processing (NLP) algorithms analyze radiology reports to extract relevant information, refine differential diagnoses, and integrate clinical data. These models process structured and unstructured text, identifying patterns and correlations that may otherwise go unnoticed. Applications include automated structured reporting, quality control, and enhanced communication of incidental and urgent findings.</p><p><strong>Results: </strong>LLMs have demonstrated the ability to assist radiologists in real-time, standardizing classifications, improving report clarity, and enhancing the integration of radiology reports into electronic health records (EHRs). They support radiologists by reducing redundancies, structuring free-text reports, and detecting clinically relevant insights. Unlike radiomics, lexomics requires minimal computational power, making it more accessible in clinical settings.</p><p><strong>Conclusion: </strong>Lexomics represents a significant advancement in AI-driven radiology, optimizing report utilization and communication. 
Future research should focus on addressing challenges such as data privacy, bias mitigation, and validation in diverse clinical scenarios to ensure ethical and effective implementation in radiological practice.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"223-225"},"PeriodicalIF":2.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145114545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
In-depth characterization of a laparoscopic radical prostatectomy procedure based on surgical process modeling.
IF 2.3 CAS Tier 3 (Medicine) Q3 ENGINEERING, BIOMEDICAL Pub Date: 2026-02-01 Epub Date: 2025-12-03 DOI: 10.1007/s11548-025-03552-5
Nuno S Rodrigues, Pedro Morais, Lukas R Buschle, Estevão Lima, João L Vilaça

Purpose: Minimally invasive surgical approaches are currently the standard of care for men with prostate cancer, presenting higher rates of erectile function preservation. With these laparoscopic techniques, there is an increasing amount of data and information available. Adaptive systems can play an important role, acting as an intelligent information filter, assuring that all the available information can become useful for the procedure and not overwhelming for the surgeon. Standardizing and structuring the surgical workflow are key requirements for such smart assistants to recognize the different surgical steps through context information about the environment. This work aims to do a detailed characterization of a laparoscopic radical prostatectomy procedure, focusing on the formalization of medical expert knowledge, via surgical process modeling.

Methods: Data were acquired manually, via online and offline observation and discussion with medical experts. A total of 14 procedures were observed, covering both manual laparoscopic radical prostatectomy and robot-assisted laparoscopic prostatectomy. The derived surgical process model (SPM) covers only the intraoperative part of the procedure, with constant feedback from the endoscopic camera. A dedicated Excel template was developed for surgery observation.

Results: The final model is represented in a descriptive and numerical format, combining task descriptions with a workflow diagram for ease of interpretation. Practical applications of the generated surgical process model are exemplified by the creation of activation trees for surgical phase identification. Anatomical structures are reported for each phase, distinguishing between visible and inferable ones. Additionally, the surgeons involved, the surgical instruments used, and the actions performed in each phase are identified. A total of 11 phases were identified and characterized. The average surgery duration was 87 min.

Conclusion: The generated surgical process model is a first step toward the development of a context-aware surgical assistant and can potentially be used as a roadmap by other research teams, operating room managers and surgical teams.
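The activation trees mentioned in the Results map context cues (instruments, actions) to surgical phases. A highly simplified stand-in scores candidate phases by how much of each phase's instrument signature is observed; the phase names and signatures below are invented for illustration, not the paper's 11 phases:

```python
# Simplified phase identification by instrument-signature overlap.
# Phase names and instrument signatures are illustrative assumptions,
# not taken from the published surgical process model.
PHASE_SIGNATURES = {
    "bladder_neck_dissection": {"monopolar_scissors", "bipolar_forceps"},
    "vesicourethral_anastomosis": {"needle_driver", "suture"},
}

def identify_phase(observed):
    """Return the phase whose instrument signature best matches what is seen."""
    scores = {phase: len(signature & observed) / len(signature)
              for phase, signature in PHASE_SIGNATURES.items()}
    return max(scores, key=scores.get)
```

A context-aware assistant of the kind envisioned in the Conclusion would replace this flat scoring with the hierarchical activation trees, which also encode ordering constraints between phases.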

{"title":"In-depth characterization of a laparoscopic radical prostatectomy procedure based on surgical process modeling.","authors":"Nuno S Rodrigues, Pedro Morais, Lukas R Buschle, Estevão Lima, João L Vilaça","doi":"10.1007/s11548-025-03552-5","DOIUrl":"10.1007/s11548-025-03552-5","url":null,"abstract":"<p><strong>Purpose: </strong>Minimally invasive surgical approaches are currently the standard of care for men with prostate cancer, presenting higher rates of erectile function preservation. With these laparoscopic techniques, there is an increasing amount of data and information available. Adaptive systems can play an important role, acting as an intelligent information filter, assuring that all the available information can become useful for the procedure and not overwhelming for the surgeon. Standardizing and structuring the surgical workflow are key requirements for such smart assistants to recognize the different surgical steps through context information about the environment. This work aims to do a detailed characterization of a laparoscopic radical prostatectomy procedure, focusing on the formalization of medical expert knowledge, via surgical process modeling.</p><p><strong>Methods: </strong>Data were acquired manually, via online and offline observation, and discussion with medical experts. A total of 14 procedures were observed. Both manual laparoscopic radical prostatectomy and robot-assisted laparoscopic prostatectomy were studied. The derived SPM focuses only on the intraoperatory part of the procedure, with constant feedback from the endoscopic camera. For surgery observation, a dedicated Excel template was developed.</p><p><strong>Results: </strong>The final model is represented in a descriptive and numerical format, combining task description with a workflow diagram arrangement for ease of interpretation. 
Practical applications of the generated surgical process model are exemplified with the creation of activation trees for surgical phase identification. Anatomical structures are reported for each phase, distinguishing between visible and inferable ones. Additionally, the surgeons involved are identified, surgical instruments, and actions performed in each phase. A total of 11 phases were identified and characterized. Average surgery duration is 87 min.</p><p><strong>Conclusion: </strong>The generated surgical process model is a first step toward the development of a context-aware surgical assistant and can potentially be used as a roadmap by other research teams, operating room managers and surgical teams.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"279-289"},"PeriodicalIF":2.3,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145670873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0