
Latest Publications in Computer Assisted Surgery

Deep learning diagnosis model of spinal tuberculosis based on CT bone window gradient attention mechanism: multi-center study.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2025-12-29 | DOI: 10.1080/24699322.2025.2599329
Sen Mo, Chong Liu, Jiang Xue, Jiarui Chen, Hao Li, Zhaojun Lu, Zhongxian Zhou, Xiaopeng Qin, Rongqing He, Boli Qin, Yahui Huang, Wei Wei, Xinli Zhan

Purpose: To develop a deep learning model based on CT bone window images to enhance the accuracy of early diagnosis of spinal tuberculosis.

Methods: This study adopted multicenter retrospective data (n = 1027). Firstly, the vertebral body region of the spine was extracted through the U-Net segmentation model. Then, the segmented images were input into the improved ResNet50 network. Combined with the CT bone window gradient attention mechanism, an end-to-end deep learning diagnostic model was constructed.
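The abstract does not specify the exact form of the bone window gradient attention mechanism; purely as an illustration of the idea, the sketch below gates feature maps with Sobel gradient magnitudes computed from the bone-window slice. The function names and the sigmoid gating are assumptions for this sketch, not the authors' implementation.

```python
import numpy as np

def sobel_gradient_magnitude(img):
    """Gradient magnitude of a 2-D CT bone-window slice via zero-padded Sobel filters."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape)
    gy = np.zeros(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = p[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def gradient_attention(features, bone_window):
    """Scale (C, H, W) feature maps by a sigmoid gate built from edge strength,
    so high-gradient regions (cortical margins, calcification contours) dominate."""
    mag = sobel_gradient_magnitude(bone_window)
    gate = 1.0 / (1.0 + np.exp(-(mag - mag.mean()) / (mag.std() + 1e-8)))
    return features * gate  # broadcasts the (H, W) gate over the channel axis
```

In the full model such a gate would sit between the U-Net-segmented input and the modified ResNet50 stages; here it only illustrates steering attention with bone-window gradients.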

Results: In the internal validation dataset, the model achieved an AUC of 0.920, accuracy of 0.874, and sensitivity of 0.876. For external test dataset 1, the AUC was 0.867, accuracy 0.801, and sensitivity 0.794; for external test dataset 2, the AUC was 0.866, accuracy 0.769, and sensitivity 0.883; and for external test dataset 3, the AUC was 0.941, accuracy 0.843, and sensitivity 0.790.

Conclusion: This multi-center study developed a deep learning model for spinal tuberculosis diagnosis assisted by a CT bone window gradient attention mechanism. The model achieved good internal validation performance (AUC = 0.920, accuracy = 0.874) and external validation performance (AUC = 0.866-0.941, accuracy = 0.769-0.843), demonstrating broad applicability across medical institutions. The main contribution of this work is strong performance on features capturing trabecular micro-fractures and the gradients of calcification contours.

Citations: 0
Rebuilding the pelvis: advances in robotic-assisted management of complex pelvic fractures.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2026-01-16 | DOI: 10.1080/24699322.2026.2615212
Linpeng Ge, Long Wang, Leilei Duan, Taiyuan Zhang

Complex pelvic fractures are notoriously challenging to fix surgically because of their intricate anatomy and proximity to vital neurovascular structures. Traditional open reduction and internal fixation (ORIF) improves stability but is complicated by excessive blood loss, longer operative times, and higher morbidity. Robotic-assisted surgical methods, such as Robot-Assisted Fracture Reduction (RAFR) and the TiRobot platform, represent a paradigm shift toward precise, minimally invasive fracture reduction and fixation. The RAFR system blends preoperative high-definition 3D CT imaging with intraoperative cone-beam CT and real-time navigation for dynamic visualization and accurate fragment control, eliminating guesswork and minimizing the risk of malposition. Its robotic arm, electrically actuated holding devices, and elastic traction counterforces enable controlled, safe fracture reduction while preserving soft tissue and neurovascular integrity. Extensive clinical evidence supports robotic assistance: reported benefits include sub-millimeter positioning accuracy, reduced intraoperative blood loss, lower radiation exposure, shorter operative times and hospital stays, and improved functional recovery scores. Compared with conventional techniques, robotic assistance also reduces postoperative infection, implant loosening, nonunion, and nerve or vessel injury. TiRobot further enhances fixation through artificial intelligence-assisted screw path planning and navigation. Although promising, adoption remains limited by high cost, the lack of haptic feedback, and a steep learning curve. More multicenter randomized clinical trials are required to establish long-term efficacy, safety, and cost-effectiveness. Robot-assisted pelvic fracture surgery is a leading-edge development with the potential to improve patient outcomes and the delivery of trauma care.

Citations: 0
Robotic-assisted versus standard laparoscopic surgery for colorectal cancer in obese patients: a systematic review and meta-analysis.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2025-12-27 | DOI: 10.1080/24699322.2025.2604610
Hongbin Wu, Lei Zheng, Tonghai Xu, Bin Zhao, Huawu Yang

Colorectal cancer represents a major global health concern, and obesity complicates its surgical management. This meta-analysis aimed to evaluate the comparative effectiveness and safety of robotic-assisted surgery and standard laparoscopic surgery in obese colorectal cancer patients. A comprehensive literature search was performed across databases from inception to April 2024. Pooled estimates included hospital stay duration, drainage tube removal time, first ventilation time, complication rates, re-admission rates, and re-operation rates. Six studies involving 4215 patients were included. Robotic-assisted surgery was associated with a statistically significant but modest reduction in hospital stay compared to laparoscopic surgery (p = 0.02). No significant differences were found for drainage tube removal time (p = 0.42) or first ventilation time (p = 0.27). Complication rates (OR [odds ratio] = 0.92, 95% confidence interval [CI]: 0.74 to 1.13, p = 0.41), re-admission rates (OR = 0.81, 95% CI: 0.31 to 2.13, p = 0.67), and re-operation rates (OR = 1.20, 95% CI: 0.77 to 1.86, p = 0.41) did not differ significantly between surgical approaches. In summary, robotic-assisted surgery offers a modest reduction in hospital stay without compromising patient safety in obese colorectal cancer patients. These findings should be interpreted with caution. Future randomized controlled trials are required to confirm these results.
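As a hedged illustration of the pooling arithmetic behind such odds-ratio estimates (not the paper's data or software), a fixed-effect inverse-variance pool of log odds ratios can be computed as follows; the study tuples are hypothetical:

```python
import math

def pooled_odds_ratio(studies):
    """Fixed-effect inverse-variance pooling of log odds ratios.

    Each study is (a, b, c, d): events/non-events in one arm (a, b)
    and the comparator arm (c, d). Returns (pooled OR, 95% CI low, high).
    """
    num = den = 0.0
    for a, b, c, d in studies:
        log_or = math.log((a * d) / (b * c))
        var = 1 / a + 1 / b + 1 / c + 1 / d   # Woolf variance of the log OR
        w = 1 / var
        num += w * log_or
        den += w
    pooled = num / den
    se = math.sqrt(1 / den)
    return tuple(math.exp(x) for x in (pooled, pooled - 1.96 * se, pooled + 1.96 * se))
```

A random-effects model (e.g. DerSimonian-Laird) would add a between-study variance term; the fixed-effect version above is the minimal case.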

Citations: 0
Multi-factorial biomechanical evaluation of plate-screw fixation in femoral shaft fractures using numerical and machine learning approaches.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2025-12-21 | DOI: 10.1080/24699322.2025.2605494
Halil Burak Mutu

Plate and screw fixation is a widely used method in the surgical treatment of femoral shaft fractures; however, mechanical performance may vary depending on implant material, fracture gap size, and loading conditions. This study aimed to investigate the biomechanical behavior of femoral shaft fractures stabilized with plate and screw fixation by applying finite element analysis (FEA) and to evaluate the predictive performance of machine learning (ML) algorithms on the numerical results. Three different fracture gap sizes (1, 2, and 3 mm) were modeled on a femur geometry, and axial loads ranging from 400 N to 1200 N (in 100 N increments) were applied. Two implant materials, Ti-6Al-4V and 316L stainless steel (SS), were assessed. The stress distribution on the plate and first screw and the displacements at the femoral head and fracture site were analyzed using two different mesh densities. Subsequently, ML algorithms including Decision Tree (DT), Multilayer Perceptron (MLP), and Support Vector Machine (SVM) were used to predict the stress and displacement values from the numerical dataset. The finer mesh provided more accurate results. Ti-6Al-4V showed lower von Mises stress values and displacement magnitudes than 316L SS. Among the ML methods, MLP and SVM demonstrated better prediction accuracy than DT.
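To sketch the surrogate-modeling workflow this abstract describes (the study itself used DT, MLP, and SVM regressors on its FEA results), the snippet below fits a plain least-squares surrogate on a synthetic stand-in table; every number in it is illustrative, not from the paper.

```python
import numpy as np

# Synthetic stand-in for an FEA result table: columns are
# [fracture gap (mm), axial load (N), material flag (0 = Ti-6Al-4V, 1 = 316L SS)].
rng = np.random.default_rng(0)
X = np.column_stack([
    rng.choice([1.0, 2.0, 3.0], 200),
    rng.choice(np.arange(400, 1300, 100), 200).astype(float),
    rng.choice([0.0, 1.0], 200),
])
# Illustrative response: peak stress grows with load and gap, higher for steel.
y = 0.05 * X[:, 1] + 8.0 * X[:, 0] + 15.0 * X[:, 2] + rng.normal(0, 1, 200)

# Least-squares surrogate in place of the study's ML regressors.
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(gap_mm, load_n, steel):
    """Predict the (synthetic) implant stress for a new FEA configuration."""
    return float(np.dot([gap_mm, load_n, float(steel)], coef[:3]) + coef[3])
```

Once trained on real FEA samples, such a surrogate answers new load/gap/material queries in microseconds instead of re-running the solver.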

Citations: 0
The impact of three-dimensional simulation and virtual reality technologies on surgical decision-making and postoperative satisfaction in aesthetic surgery: a preliminary study.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2026-01-10 | DOI: 10.1080/24699322.2026.2614532
Mert Ersan, Hasan Demirbaşoğlu, Begüm Kolcu, Hilal Aybüke Yıldız

Background: Three-dimensional (3D) simulation and virtual reality (VR) technologies are increasingly used in aesthetic surgery consultations to enhance decision-making and expectation management. However, their impact on surgical decision-making and postoperative satisfaction across different procedures remains unclear.

Objectives: This study aimed to evaluate the influence of 3D simulation and VR technology in patients undergoing rhinoplasty, breast augmentation, mastopexy, augmentation-mastopexy and breast reduction.

Methods: A retrospective study was conducted with 75 female patients who underwent primary aesthetic surgery. Preoperative 3D simulations and VR visualizations were generated using the Crisalix Virtual Esthetics system (Crisalix S.A., Switzerland). Patients were assessed postoperatively at one year using structured surveys to evaluate the influence of 3D simulation and VR technology on their decision-making and satisfaction. Statistical analyses included the Kruskal-Wallis H test, the Chi-Square test, and Spearman's correlation.
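For reference, the Kruskal-Wallis H statistic used for the between-procedure comparisons can be computed directly. This is a minimal version without the tie correction that full implementations such as scipy.stats.kruskal apply, and the sample data below are made up, not the study's ratings.

```python
import numpy as np

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) over k independent groups."""
    data = np.concatenate(groups)
    order = data.argsort()
    ranks = np.empty(len(data))
    ranks[order] = np.arange(1, len(data) + 1)   # rank the pooled sample
    n = len(data)
    h = 0.0
    start = 0
    for g in groups:
        r = ranks[start:start + len(g)]           # ranks belonging to this group
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)
```

Under the null hypothesis H is approximately chi-square distributed with k - 1 degrees of freedom, which is how the p-values in the Results section would be obtained.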

Results: 3D simulation had the greatest influence on breast augmentation (8.4/10), rhinoplasty (7.6/10), and augmentation-mastopexy (7.1/10) patients but was less impactful for mastopexy (6.6/10) and breast reduction (3.8/10) patients (p < 0.001). The most decisive factors were previous patient photos (30.7%) and communication with the surgeon (29.3%), with simulation ranking third (18.7%). Postoperative similarity ratings were highest in breast augmentation (7.9/10) and rhinoplasty (7.5/10) patients. While 70.7% of patients would recommend 3D simulation, VR headset use did not influence decisions (p < 0.001).

Conclusions: 3D simulation enhances patient engagement and expectation management across various aesthetic procedures. While its influence is more significant in surgeries primarily focused on aesthetic outcomes, it serves as a complementary tool rather than a definitive factor in decision-making.

Citations: 0
No cancer left behind: a testbed and demonstration of concept for photoacoustic tumor bed inspection.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2025-12-31 | DOI: 10.1080/24699322.2025.2604123
Laura Connolly, Hyunwoo Song, Keshuai Xu, Anton Deguet, Simon Leonard, Gabor Fichtinger, Parvin Mousavi, Russell H Taylor, Emad Boctor

Cancer resection surgery is unsuccessful if tumor tissue is left behind in the surgical cavity. Identifying the residual cancer requires additional imaging or postoperative histological analysis. Photoacoustic imaging can be used to image both the surface and depths of the resection cavity; however, its performance hinges on consistent probe placement and stable acoustic and optical coupling. As intra-cavity deployment of photoacoustic imaging is largely uncharted, several potential embodiments warrant rigorous investigation. We address this need with an open-source robotic testbed for intraoperative tumor-bed inspection using photoacoustic imaging. The platform integrates the da Vinci Research Kit, depth imaging, and electromagnetic tracking to automate cavity scanning and maintain repeatable probe trajectories. Using tissue-mimicking phantoms, we (i) demonstrate a novel imaging embodiment for photoacoustic tumor-bed inspection and (ii) show how this testbed can be used to investigate and optimize tumor bed inspection strategies and configurations. This study establishes the feasibility of detecting and mapping residual cancer within a simulated surgical cavity. The primary contribution is the testbed itself, designed for integration with existing surgical navigation workflows and rapid prototyping. This testbed serves as an essential foundation for systematic evaluation of photoacoustic, robot-assisted strategies for improving intraoperative margin assessment.
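The testbed's actual control code lives in the project's open-source release; purely to illustrate the kind of repeatable scan trajectory such a platform automates, here is a serpentine waypoint generator. The function name and units are hypothetical and are not the testbed's API.

```python
import numpy as np

def raster_waypoints(width_mm, depth_mm, step_mm):
    """Serpentine (boustrophedon) x-y waypoints covering a rectangular cavity
    patch, the usual pattern for repeatable probe sweeps."""
    xs = np.arange(0.0, width_mm + 1e-9, step_mm)
    ys = np.arange(0.0, depth_mm + 1e-9, step_mm)
    pts = []
    for i, y in enumerate(ys):
        row = xs if i % 2 == 0 else xs[::-1]   # reverse alternate rows
        pts.extend((float(x), float(y)) for x in row)
    return pts
```

In a real system each (x, y) waypoint would be lifted onto the depth-camera surface mesh and converted to a robot pose with the probe held normal to the tissue for stable acoustic coupling.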

Citations: 0
Reinforcement learning for medical image analysis: a systematic review of algorithms, engineering challenges, and clinical deployment.
IF 1.9 | Medicine, CAS Tier 4 | Q3 SURGERY | Pub Date: 2026-12-01 | Epub Date: 2025-12-26 | DOI: 10.1080/24699322.2025.2597553
Masuda Begum Sampa, Nor Hidayati Abdul Aziz, Md Siddikur Rahman, Nor Azlina Ab Aziz, Rosli Besar, Anith Khairunnisa Ghazali

Reinforcement learning (RL) has emerged as a powerful artificial intelligence paradigm in medical image analysis, excelling in complex decision-making tasks. This systematic review synthesizes the applications of RL across diverse imaging domains (landmark detection, image segmentation, lesion identification, disease diagnosis, and image registration) by analyzing 20 peer-reviewed studies published between 2019 and 2023. RL methods are categorized into classical and deep reinforcement learning (DRL) approaches, focusing on their performance, integration with other machine learning models, and clinical utility. Deep Q-Networks (DQN) demonstrated strong performance in anatomical landmark detection and cardiovascular risk estimation, while Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) achieved optimal policy learning for vessel tracking. Policy gradient methods such as REINFORCE, Twin-Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) were successfully applied to breast lesion detection, white-matter connectivity analysis, and vertebral segmentation. Monte Carlo learning, meta-RL, and A3C methods proved effective for adaptive questioning, image quality evaluation, and multimodal image registration. To consolidate these findings, we propose a unified Reinforcement Learning Medical Imaging (RLMI) framework encompassing four core components: state representation, policy optimization, reward formulation, and environment modeling. This framework enhances sequential agent learning, stabilizes navigation, and generalizes across imaging modalities and tasks. Key challenges remain, including optimizing task-specific policies, integrating anatomical contexts, addressing data scarcity, and improving interpretability.
This review highlights RL's potential to enhance accuracy, adaptability, and efficiency in medical image analysis, providing valuable guidance for researchers and clinicians applying RL in real-world healthcare settings.
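
The RLMI decomposition the review proposes (state, policy, reward, environment) can be illustrated with a toy example. The sketch below trains a tabular Q-learning agent to walk a 1-D grid toward a target "landmark" position; the grid environment, reward values, and hyperparameters are invented for illustration and are not the reviewed systems' actual implementations.

```python
import random

# Toy RLMI decomposition on a 1-D "landmark search":
#   state       = position on a 10-cell grid
#   policy      = epsilon-greedy over a tabular Q function
#   reward      = +1 at the landmark, small step penalty elsewhere
#   environment = deterministic left/right moves, clamped at the edges

def train_landmark_agent(target=7, n_states=10, episodes=500, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(n_states)]  # actions: 0 = left, 1 = right
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for _ in range(episodes):
        s = rng.randrange(n_states)            # exploring starts
        for _ in range(50):
            a = rng.randrange(2) if rng.random() < eps \
                else max((0, 1), key=lambda x: q[s][x])
            s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
            r = 1.0 if s2 == target else -0.01  # reward formulation
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
            if s == target:
                break
    return q

def greedy_path(q, start, target, limit=20):
    """Roll out the learned greedy policy from `start`."""
    s, path = start, [start]
    while s != target and len(path) < limit:
        s = max(0, min(len(q) - 1, s + (1 if q[s][1] >= q[s][0] else -1)))
        path.append(s)
    return path

q = train_landmark_agent()
print(greedy_path(q, 0, 7)[-1])  # final state of the greedy rollout
```

The same four-part structure carries over to the imaging tasks in the review, where the state is an image patch, the policy a deep network, and the reward a task-specific accuracy signal.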

Citations: 0
HiEndo: harnessing large-scale data for generating high-resolution laparoscopy videos under a two-stage framework.
IF 1.5 4区 医学 Q3 SURGERY Pub Date : 2025-12-01 Epub Date: 2025-07-25 DOI: 10.1080/24699322.2025.2536643
Zhao Wang, Yeqian Zhang, Jiayi Gu, Yueyao Chen, Yonghao Long, Xiang Xia, Puhua Zhang, Chunchao Zhu, Zerui Wang, Qi Dou, Zheng Wang, Zizhen Zhang

Recent success in generative AI has demonstrated great potential in various medical scenarios. However, how to generate realistic and high-fidelity gastrointestinal laparoscopy videos still lacks exploration. A recent work, Endora, proposes a basic generation model for the gastrointestinal laparoscopy scenario, producing low-resolution laparoscopy videos, which cannot meet the real needs of robotic surgery. To address this issue, we propose an innovative two-stage video generation architecture, HiEndo, for generating high-resolution gastrointestinal laparoscopy videos with high fidelity. In the first stage, we build a diffusion transformer that generates a low-resolution laparoscopy video, using the basic capability of Endora as an initial start. In the second stage, we further design a super-resolution module to improve the resolution of the initial video and refine its fine-grained details. With these two stages, we obtain high-resolution, high-fidelity laparoscopy videos that can meet real-world clinical usage. We also collect a large-scale gastrointestinal laparoscopy video dataset with 61,270 video clips for training and validation of our proposed method. Extensive experimental results demonstrate the effectiveness of our proposed framework. For example, our model achieves improvements of 15.1% in Fréchet Video Distance and 3.7% in F1 score over the previous state-of-the-art method.
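
The two-stage idea (coarse generation, then super-resolution) can be sketched schematically. Below, a random low-resolution frame stands in for the stage-1 diffusion transformer, and plain bilinear upsampling stands in for the learned stage-2 super-resolution module; neither is HiEndo's actual architecture, and all names are illustrative.

```python
import numpy as np

def stage1_generate(h=16, w=16, seed=0):
    """Placeholder for the stage-1 generator: a random low-res frame in [0, 1)."""
    rng = np.random.default_rng(seed)
    return rng.random((h, w)).astype(np.float32)

def stage2_super_resolve(frame, scale=4):
    """Placeholder for the stage-2 module: bilinear upsampling by `scale`."""
    h, w = frame.shape
    # Continuous source coordinates for each output pixel (align-corners=False).
    ys = (np.arange(h * scale) + 0.5) / scale - 0.5
    xs = (np.arange(w * scale) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1); y1 = np.clip(y0 + 1, 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1); x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0, 1)[:, None]
    wx = np.clip(xs - x0, 0, 1)[None, :]
    top = frame[y0][:, x0] * (1 - wx) + frame[y0][:, x1] * wx
    bot = frame[y1][:, x0] * (1 - wx) + frame[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

low = stage1_generate()
high = stage2_super_resolve(low)
print(low.shape, high.shape)  # (16, 16) (64, 64)
```

In the real system a learned refinement network would replace the interpolation, but the data flow — low-resolution frames produced first, then upscaled and refined — is the same.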

Citations: 0
Patient-specific functional liver segments based on centerline classification of the hepatic and portal veins.
IF 1.9 4区 医学 Q3 SURGERY Pub Date : 2025-12-01 Epub Date: 2025-10-31 DOI: 10.1080/24699322.2025.2580307
Gabriella d'Albenzio, Ruoyan Meng, Davit Aghayan, Egidijus Pelanis, Tomas Sakinis, Ole Vegard Solberg, Geir Arne Tangen, Rahul P Kumar, Ole Jakob Elle, Bjørn Edwin, Rafael Palomar

Purpose: Couinaud's liver segment classification has been widely adopted for liver surgery planning, yet its rigid anatomical boundaries often fail to align precisely with individual patient anatomy. This study proposes a novel patient-specific liver segmentation method based on detailed classification of hepatic and portal veins to improve anatomical adherence and clinical relevance.

Methods: Our proposed method involves two key stages: (1) surgeons annotate vascular endpoints on 3D models of hepatic and portal veins, from which vessel centerlines are computed; and (2) liver segments are calculated by assigning voxel labels based on proximity to these vascular centerlines. The accuracy and clinical applicability of our Hepatic and Portal Vein-based Classification (HPVC) were compared with conventional Plane-Based Classification (PBC), Portal Vein-Based Classification (PVC), and an automated deep learning method (nnU-Net) using volumetric measurements, Dice similarity scores, and expert evaluations.

Results: HPVC demonstrated superior anatomical conformity compared to traditional methods, especially in complex segments like 5 and 8, providing segmentations more reflective of actual vascular territories. Volumetric analysis revealed significant discrepancies among the methods, particularly with nnU-Net generally producing larger segment volumes. HPVC consistently achieved higher surgeon-rated scores in patient-specific anatomical adherence, perfusion region assessment, and accuracy in surgical planning compared to PBC, PVC, and nnU-Net.

Conclusion: The presented HPVC method offers substantial improvements in liver segmentation precision, especially relevant for surgical planning in anatomically complex cases. Its integration into clinical workflows via the open-source platform 3D Slicer significantly enhances its accessibility and usability.
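
Stage (2) of the method — assigning each liver voxel to a segment based on proximity to the vascular centerlines — can be sketched as a nearest-point search. The centerline coordinates and segment labels below are invented toy data, and the brute-force distance computation stands in for whatever efficient spatial index the actual implementation uses.

```python
import numpy as np

def label_voxels(voxels, centerline_pts, centerline_labels):
    """Assign each voxel the segment id of its nearest centerline point.

    voxels:            (N, 3) voxel coordinates
    centerline_pts:    (M, 3) points sampled along the vessel centerlines
    centerline_labels: (M,)   segment id of each centerline point
    """
    # Squared Euclidean distance from every voxel to every centerline point.
    d2 = ((voxels[:, None, :] - centerline_pts[None, :, :]) ** 2).sum(axis=-1)
    return centerline_labels[d2.argmin(axis=1)]

# Two toy centerlines: "segment 5" along the x-axis, "segment 8" shifted in z.
pts = np.array([[i, 0.0, 0.0] for i in range(5)] +
               [[i, 0.0, 4.0] for i in range(5)])
labels = np.array([5] * 5 + [8] * 5)
voxels = np.array([[2.0, 1.0, 0.5],   # nearer segment 5's centerline
                   [2.0, 1.0, 3.5]])  # nearer segment 8's centerline
print(label_voxels(voxels, pts, labels))  # [5 8]
```

For whole-liver volumes the (N, M) distance matrix would be far too large; a KD-tree query per voxel chunk is the natural replacement, without changing the assignment rule.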

Citations: 0
Efficient computer vision pipeline for automated anesthetic injection documentation.
IF 1.9 4区 医学 Q3 SURGERY Pub Date : 2025-12-01 Epub Date: 2025-10-29 DOI: 10.1080/24699322.2025.2582020
Amit Nissan, Fadi Mahameed, Sapir Gershov, Aeyal Raz, Shlomi Laufer

This study introduces a novel computer vision approach to automate documentation of anesthetic injection events in the operating room. The objective is to enhance documentation accuracy and reliability by providing precise identification of injection events and anesthetic amounts administered, while addressing stopcock placement variability. We developed a computer vision pipeline tailored for automated anesthetic injection documentation in surgical environments. The pipeline leverages the Segment Anything Model (SAM) for robust syringe segmentation, combined with vector similarity matching for generalization across different syringe sizes and occlusions. This few-shot segmentation strategy ensures generalization while minimizing annotation effort. The pipeline also integrates lightweight methods for motion detection, syringe classification, and volume estimation to ensure quasi-real-time performance. The system was tested on 304 injection events performed by 19 anesthesiologists using syringes of four sizes (3, 5, 10, and 20 ml). The pipeline achieved 100% injection-event detection sensitivity and an overall 86.3% documentation success rate. Volume estimation accuracy varied across syringe sizes, with mean absolute error (MAE) values of 0.10, 0.22, 0.37, and 0.61 ml for 3, 5, 10, and 20 ml syringes, respectively. Results compare favorably to manual measurements, which can have mean percentage errors of 1.4%-18.6%. Runtime optimization ensured quasi-real-time operation, processing each event within 10-12 s, supporting clinical workflow integration. This work presents a solution that significantly improves anesthetic injection documentation while enhancing patient safety, standardizing procedures, and reducing anesthesiologists' workload: a fully automated, camera-only pipeline validated with clinicians in quasi-real-time.
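
The per-size accuracy figures reported above are grouped mean absolute errors (MAE) between estimated and ground-truth injected volumes. A minimal sketch of that metric follows; the sample volume readings are invented for illustration and are not data from the study.

```python
import numpy as np

def mae_by_size(sizes, est_ml, true_ml):
    """Mean absolute error of volume estimates, grouped by syringe size (ml)."""
    sizes, est_ml, true_ml = map(np.asarray, (sizes, est_ml, true_ml))
    return {int(s): float(np.abs(est_ml[sizes == s] - true_ml[sizes == s]).mean())
            for s in np.unique(sizes)}

# Hypothetical paired readings: (syringe size, estimated ml, ground-truth ml).
sizes = [3, 3, 10, 10]
est   = [1.1, 2.0, 6.5, 8.0]
true  = [1.0, 2.2, 6.0, 8.3]
print(mae_by_size(sizes, est, true))  # MAE per syringe size, in ml
```

Grouping by syringe size matters because, as the abstract shows, absolute error grows with syringe capacity, so a single pooled MAE would hide the behavior on the largest syringes.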

Citations: 0