Purpose: To develop a deep learning model based on CT bone window images to enhance the accuracy of early diagnosis of spinal tuberculosis.
Methods: This study used multicenter retrospective data (n = 1027). First, the vertebral body region of the spine was extracted with a U-Net segmentation model. The segmented images were then fed into an improved ResNet50 network that, combined with a CT bone window gradient attention mechanism, formed an end-to-end deep learning diagnostic model.
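The abstract does not specify how the gradient attention is computed; the following is a minimal sketch, assuming a Sobel-gradient attention map applied to the segmented bone-window image before an improved ResNet50 classifier. Module names (GradientAttention, STBClassifier) are illustrative, not the authors' code.

```python
# Hypothetical sketch of the two-stage pipeline: segmented bone-window CT ->
# gradient-based spatial attention -> ResNet50 classifier. Not the published model.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

class GradientAttention(nn.Module):
    """Builds a spatial attention map from Sobel image gradients (assumed design)."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        self.register_buffer("kx", gx.view(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):                       # x: (B, 1, H, W) bone-window CT
        gx = F.conv2d(x, self.kx, padding=1)
        gy = F.conv2d(x, self.ky, padding=1)
        mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
        return torch.sigmoid(mag)               # attention weights in (0, 1)

class STBClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.att = GradientAttention()
        self.backbone = resnet50(weights=None)
        self.backbone.conv1 = nn.Conv2d(1, 64, 7, 2, 3, bias=False)   # single-channel CT input
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, seg_ct):                  # segmented vertebral-body image
        attended = seg_ct * (1.0 + self.att(seg_ct))   # emphasize gradient-rich regions
        return self.backbone(attended)

logits = STBClassifier()(torch.randn(2, 1, 224, 224))   # smoke test: (2, 2) logits
```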
Results: In the internal validation dataset, the model achieved an AUC of 0.920, accuracy of 0.874, and sensitivity of 0.876. For external test dataset 1, the AUC was 0.867, accuracy 0.801, and sensitivity 0.794; for external test dataset 2, the AUC was 0.866, accuracy 0.769, and sensitivity 0.883; and for external test dataset 3, the AUC was 0.941, accuracy 0.843, and sensitivity 0.790.
Conclusion: This multi-center study built a deep learning model for spinal tuberculosis diagnosis supported by a CT bone window gradient attention mechanism. The model showed good internal validation performance (AUC = 0.920, accuracy = 0.874) and external validation performance (AUC = 0.866-0.941, accuracy = 0.769-0.843), indicating broad applicability across medical institutions. The main contribution of this work is the effective extraction of gradient features related to trabecular micro-fractures and calcification contours.
{"title":"Deep learning diagnosis model of spinal tuberculosis based on CT bone window gradient attention mechanism: multi-center study.","authors":"Sen Mo, Chong Liu, Jiang Xue, Jiarui Chen, Hao Li, Zhaojun Lu, Zhongxian Zhou, Xiaopeng Qin, Rongqing He, Boli Qin, Yahui Huang, Wei Wei, Xinli Zhan","doi":"10.1080/24699322.2025.2599329","DOIUrl":"10.1080/24699322.2025.2599329","url":null,"abstract":"<p><strong>Purpose: </strong>To develop a deep learning model based on CT bone window images to enhance the accuracy of early diagnosis of spinal tuberculosis.</p><p><strong>Methods: </strong>This study adopted multicenter retrospective data (<i>n</i> = 1027). Firstly, the vertebral body region of the spine was extracted through the U-Net segmentation model. Then, the segmented images were input into the improved ResNet50 network. Combined with the CT bone window gradient attention mechanism, an end-to-end deep learning diagnostic model was constructed.</p><p><strong>Results: </strong>In the internal validation datasets, the model achieved an AUC of 0.920, accuracy of 0.874 and sensitivity of 0.876. For External test datasets 1, the AUC was 0.867, accuracy 0.801 and sensitivity 0.794; for External test datasets 2, the AUC was 0.866, accuracy 0.769, and sensitivity 0.883; and for External test datasets 3, the AUC was 0.941, accuracy 0.843 and sensitivity 0.790.</p><p><strong>Conclusion: </strong>The multi-center study built up a deep learning model for spinal tuberculosis diagnosis with the assist of the CT bone window gradient attention mechanism. The model achieved a good internal verification ability (AUC = 0.920, accuracy rate = 0.874) and external verification ability (AUC = 0.866-0.941, accuracy rate = 0.769-0.843) which showed the wide applicability of the model to different medical institutions. The main developments of this work are the good performances for features that extract relevant information about trabecular micro-fractures and calcification contours' gradients.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2599329"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2026-01-16 | DOI: 10.1080/24699322.2026.2615212
Linpeng Ge, Long Wang, Leilei Duan, Taiyuan Zhang
Complex pelvic fractures are notoriously challenging to treat surgically because of their intricate anatomy and proximity to vital neurovascular structures. Traditional open reduction and internal fixation (ORIF) improves stability but is complicated by substantial blood loss, longer operative times, and morbidity. Robotic-assisted surgical methods, such as Robot-Assisted Fracture Reduction (RAFR) and the TiRobot platform, offer a paradigm shift toward precise, minimally invasive fracture reduction and fixation. The RAFR system combines preoperative high-definition 3D CT imaging with intraoperative cone-beam CT and real-time navigation, providing dynamic visualization and accurate fragment control that reduce guesswork and the risk of malposition. Its robotic arm, electrically actuated holding devices, and elastic traction counterforces allow controlled, safe fracture reduction while preserving soft tissue and neurovascular integrity. Extensive clinical evidence indicates that robotic assistance improves surgical accuracy, with sub-millimeter positioning discrepancies, reduces intraoperative blood loss, radiation exposure, operative time, and hospital stay, and improves functional outcome scores. Compared with conventional techniques, robotic assistance also reduces postoperative infection, implant loosening, nonunion, and nerve or vessel injury. TiRobot further improves fixation through artificial intelligence-assisted screw path planning and navigation. Although promising, adoption is limited by high cost, a lack of haptic feedback, and a steep learning curve. Additional multicenter randomized clinical trials are needed to establish long-term efficacy, safety, and cost-effectiveness. Robot-assisted pelvic fracture surgery is a leading-edge development with the potential to improve patient outcomes and trauma care delivery.
{"title":"Rebuilding the pelvis: advances in robotic-assisted management of complex pelvic fractures.","authors":"Linpeng Ge, Long Wang, Leilei Duan, Taiyuan Zhang","doi":"10.1080/24699322.2026.2615212","DOIUrl":"https://doi.org/10.1080/24699322.2026.2615212","url":null,"abstract":"<p><p>Complex pelvic fractures are infamously challenging to fix surgically because of their fine anatomy and proximity to vital neurovascular structures. Traditional open reduction and internal fixation (ORIF) improves stability but is complicated by excessive blood loss, longer operative time, and morbidity. Robotic-assisted surgical methods, i.e. Robot-Assisted Fracture Reduction (RAFR) and the TiRobot platform, provide a paradigm shift toward precise, minimally invasive fracture reduction and fixation. The RAFR system blends preoperative high-definition 3D CT imaging with intraoperative cone-beam CT and real-time navigation for dynamic visualization and accurate fragment control to eliminate guesswork and minimize the risk of malposition. Its cutting-edge robotic arm, electrically actuated holding devices, and elastic counterforces of traction ensure controlled and safe fracture reduction with soft tissue and neurovascular integrity preservation. Robot-assisted support is assisted by extensive clinical evidence to enhance the accuracy of surgery with sub-millimeter positioning discrepancies, reduce intraoperative blood loss, reduce exposure to radiation, reduce operative and hospital stay times, and enhance functional restoration according to scores demonstrated. Robot over conventional techniques reduces postoperative infection, implant loosening, nonunion, and nerve or vessel injury. TiRobot enhances fixation using artificial intelligence-assisted screw path planning and navigation. Albeit promising, it has limitations in adoption, such as being costly, having no feedback, and a high learning curve. More multicenter randomized clinical trials are required to estimate long-term efficacy, safety, and cost-effectiveness. Robot-assisted pelvic fracture surgery is a leading-edge development that has the ability to improve patient outcomes and the delivery of trauma care.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2615212"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145991672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2025-12-27 | DOI: 10.1080/24699322.2025.2604610
Hongbin Wu, Lei Zheng, Tonghai Xu, Bin Zhao, Huawu Yang
Colorectal cancer represents a major global health concern, and obesity complicates its surgical management. This meta-analysis aimed to evaluate the comparative effectiveness and safety of robotic-assisted surgery and standard laparoscopic surgery in obese colorectal cancer patients. A comprehensive literature search was performed across databases from inception to April 2024. Pooled estimates included hospital stay duration, drainage tube removal time, first ventilation time, complication rates, re-admission rates, and re-operation rates. Six studies involving 4215 patients were included. Robotic-assisted surgery was associated with a statistically significant but modest reduction in hospital stay compared with laparoscopic surgery (p = 0.02). No significant differences were found for drainage tube removal time (p = 0.42) or first ventilation time (p = 0.27). Complication rates (odds ratio [OR] = 0.92, 95% confidence interval [CI]: 0.74 to 1.13, p = 0.41), re-admission rates (OR = 0.81, 95% CI: 0.31 to 2.13, p = 0.67), and re-operation rates (OR = 1.20, 95% CI: 0.77 to 1.86, p = 0.41) did not differ significantly between surgical approaches. Robotic-assisted surgery provides a modest reduction in hospital stay without compromising patient safety in obese colorectal cancer patients. These findings should be interpreted with caution, and future randomized controlled trials are required to confirm them.
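The pooling model is not stated in the abstract; the sketch below shows a generic fixed-effect, inverse-variance pooling of log odds ratios, the kind of calculation behind estimates such as OR = 0.92 (95% CI 0.74-1.13). The per-study values are placeholders, not data from the six included studies.

```python
# Illustrative fixed-effect inverse-variance pooling of study-level odds ratios.
import numpy as np
from scipy import stats

study_or = np.array([0.85, 1.05, 0.90])          # hypothetical per-study ORs
study_se = np.array([0.20, 0.25, 0.18])          # SE of log(OR) per study

log_or = np.log(study_or)
w = 1.0 / study_se ** 2                          # inverse-variance weights
pooled_log = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))

or_pooled = np.exp(pooled_log)
ci = np.exp(pooled_log + np.array([-1.96, 1.96]) * pooled_se)
p = 2 * stats.norm.sf(abs(pooled_log / pooled_se))
print(f"OR {or_pooled:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.2f}")
```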
{"title":"Robotic-assisted versus standard laparoscopic surgery for colorectal cancer in obese patients: a systematic review and meta-analysis.","authors":"Hongbin Wu, Lei Zheng, Tonghai Xu, Bin Zhao, Huawu Yang","doi":"10.1080/24699322.2025.2604610","DOIUrl":"10.1080/24699322.2025.2604610","url":null,"abstract":"<p><p>Colorectal cancer represents a major global health concern and obesity adds complicates its surgical management. This meta-analysis aimed to evaluate the comparative effectiveness and safety of robotic-assisted surgery and standard laparoscopic surgery in obese colorectal cancer patients. A comprehensive literature search performed across databases from inception to April 2024. Pooled estimates included hospital stay duration, drainage tube removal time, first ventilation time, complication rates, re-admission rates and re-operative rates. Six studies involving 4215 patients were included. Robotic-assisted surgery was associated with a statistically significant but modest reduction in hospital stay time compared to laparoscopic surgery (<i>p</i> = 0.02). No significant differences were found for drainage tube removal time (<i>p</i> = 0.42) and first ventilation time (<i>p</i> = 0.27). Complication rates (OR [odds ratio] = 0.92, 95% confidence interval [CI]: 0.74 to 1.13, <i>p</i> = 0.41), re-admission rates (OR = 0.81, 95% CI: 0.31 to 2.13, <i>p</i> = 0.67) and re-operative rates (OR = 1.20, 95% CI: 0.77 to 1.86, <i>p</i> = 0.41) did not significantly differ between surgical approaches. Robotic-assisted surgery significantly provides a modest reduction in hospital stay duration without compromising patient safety for obese colorectal cancer patients. These findings should be interpreted with caution. Future randomized controlled trials are required to confirm these results.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2604610"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145844299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2025-12-21 | DOI: 10.1080/24699322.2025.2605494
Halil Burak Mutu
Plate and screw fixation is a widely used method in the surgical treatment of femoral shaft fractures; however, mechanical performance may vary depending on implant material, fracture gap size, and loading conditions. This study aimed to investigate the biomechanical behavior of femoral shaft fractures stabilized with plate and screw fixation using finite element analysis (FEA) and to evaluate the predictive performance of machine learning (ML) algorithms trained on the numerical results. Three fracture gap sizes (1, 2, and 3 mm) were modeled on a femur geometry, and axial loads ranging from 400 N to 1200 N (in 100 N increments) were applied. Two implant materials, Ti-6Al-4V and 316L stainless steel (SS), were assessed. The stress distribution on the plate and first screw and the displacements at the femoral head and fracture site were analyzed using two different mesh densities. Subsequently, ML algorithms including Decision Tree (DT), Multilayer Perceptron (MLP), and Support Vector Machine (SVM) were used to predict the stress and displacement values from the numerical dataset. The finer mesh provided more accurate results. Ti-6Al-4V showed lower von Mises stress values and displacement magnitudes than 316L SS. Among the ML methods, MLP and SVM demonstrated better prediction accuracy than DT. The integration of FEA and ML techniques enables efficient prediction of implant biomechanics, offering a promising approach for preclinical evaluation and optimization of orthopedic fixation systems.
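As a rough illustration of the FEA-to-ML surrogate idea, the sketch below trains the three named regressor families on a synthetic table of (gap size, axial load, material) versus a toy peak von Mises stress. The data-generating formula is invented for demonstration and does not reflect the study's FEA results.

```python
# Toy surrogate-model sketch: tabular FEA inputs -> predicted stress, under assumed data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
gap = rng.choice([1.0, 2.0, 3.0], 200)            # fracture gap (mm)
load = rng.uniform(400, 1200, 200)                # axial load (N)
mat = rng.choice([0, 1], 200)                     # 0 = Ti-6Al-4V, 1 = 316L SS
stress = 50 + 0.15 * load + 20 * gap + 30 * mat + rng.normal(0, 5, 200)   # toy MPa values

X = np.column_stack([gap, load, mat])
X_tr, X_te, y_tr, y_te = train_test_split(X, stress, random_state=0)

models = {
    "DT": DecisionTreeRegressor(random_state=0),
    "MLP": make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVR(C=100.0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(r2_score(y_te, model.predict(X_te)), 3))   # R^2 on held-out cases
```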
{"title":"Multi-factorial biomechanical evaluation of plate-screw fixation in femoral shaft fractures using numerical and machine learning approaches.","authors":"Halil Burak Mutu","doi":"10.1080/24699322.2025.2605494","DOIUrl":"https://doi.org/10.1080/24699322.2025.2605494","url":null,"abstract":"<p><p>Plate and screw fixation is a widely used method in the surgical treatment of femoral shaft fractures; however, mechanical performance may vary depending on implant material, fracture gap size and loading conditions. This study aimed to investigate the biomechanical behavior of femoral shaft fractures stabilized with plate and screw fixation by applying finite element analysis (FEA) and to evaluate the predictive performance of machine learning (ML) algorithms based on numerical results. Three different fracture gap sizes (1, 2,and 3 mm) were modeled on a femur geometry, and axial loads ranging from 400 N to 1200 N (in 100 N increments) were applied. Two implant materials, Ti-6Al-4V and 316 L stainless steel (SS), were assessed. The stress distribution on the plate and first screw and the displacements at the femoral head and fracture site were analyzed using two different mesh densities. Subsequently, ML algorithms including Decision Tree (DT), Multilayer Perceptron (MLP) and Support Vector Machine (SVM) were used to predict the stress and displacement values based on the numerical dataset. The finer mesh provided more accurate results. Ti-6Al-4V showed lower von Mises stress values and displacement magnitudes compared to 316 L SS. Among the ML methods, MLP and SVM demonstrated better prediction accuracy than DT. The integration of FEA and ML techniques enables efficient prediction of implant biomechanics, offering a promising approach for preclinical evaluation and optimization of orthopedic fixation systems.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2605494"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145806169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2026-01-10 | DOI: 10.1080/24699322.2026.2614532
Mert Ersan, Hasan Demirbaşoğlu, Begüm Kolcu, Hilal Aybüke Yıldız
Background: Three-dimensional (3D) simulation and virtual reality (VR) technologies are increasingly used in aesthetic surgery consultations to enhance decision-making and expectation management. However, their impact on surgical decision-making and postoperative satisfaction across different procedures remains unclear.
Objectives: This study aimed to evaluate the influence of 3D simulation and VR technology in patients undergoing rhinoplasty, breast augmentation, mastopexy, augmentation-mastopexy and breast reduction.
Methods: A retrospective study was conducted with 75 female patients who underwent primary aesthetic surgery. Preoperative 3D simulations and VR visualizations were generated using the Crisalix Virtual Esthetics system (Crisalix S.A., Switzerland). Patients were assessed postoperatively at one year using structured surveys to evaluate the influence of 3D simulation and VR technology on their decision-making and satisfaction. Statistical analyses included the Kruskal-Wallis H test, the Chi-Square test, and Spearman's correlation.
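For orientation, the snippet below runs the three named analyses (Kruskal-Wallis H, chi-square, Spearman correlation) with SciPy on made-up rating data; the groups, counts, and scores are hypothetical and not the study's data.

```python
# Hedged illustration of the Methods' statistical tests on synthetic ratings.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rhinoplasty = rng.integers(5, 11, 20)            # 0-10 influence ratings (toy)
augmentation = rng.integers(6, 11, 20)
reduction = rng.integers(1, 8, 20)

h, p_kw = stats.kruskal(rhinoplasty, augmentation, reduction)        # group comparison
table = np.array([[15, 5], [8, 12]])                                 # e.g. would-recommend by group
chi2, p_chi, dof, expected = stats.chi2_contingency(table)           # categorical association
rho, p_s = stats.spearmanr(rhinoplasty, rng.integers(5, 11, 20))     # e.g. influence vs similarity
print(round(p_kw, 3), round(p_chi, 3), round(rho, 3))
```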
Results: 3D simulation had the greatest influence on breast augmentation (8.4/10), rhinoplasty (7.6/10), and augmentation-mastopexy (7.1/10) patients but was less impactful for mastopexy (6.6/10) and breast reduction (3.8/10) patients (p < 0.001). The most decisive factors were previous patient photos (30.7%) and communication with the surgeon (29.3%), with simulation ranking third (18.7%). Postoperative similarity ratings were highest in breast augmentation (7.9/10) and rhinoplasty (7.5/10) patients. While 70.7% of patients would recommend 3D simulation, VR headset use did not influence decisions (p < 0.001).
Conclusions: 3D simulation enhances patient engagement and expectation management across various aesthetic procedures. While its influence is more significant in surgeries primarily focused on aesthetic outcomes, it serves as a complementary tool rather than a definitive factor in decision-making.
{"title":"The impact of three-dimensional simulation and virtual reality technologies on surgical decision-making and postoperative satisfaction in aesthetic surgery: a preliminary study.","authors":"Mert Ersan, Hasan Demirbaşoğlu, Begüm Kolcu, Hilal Aybüke Yıldız","doi":"10.1080/24699322.2026.2614532","DOIUrl":"https://doi.org/10.1080/24699322.2026.2614532","url":null,"abstract":"<p><strong>Background: </strong>Three-dimensional (3D) simulation and virtual reality (VR) technologies are increasingly used in aesthetic surgery consultations to enhance decision-making and expectation management. However, their impact on surgical decision-making and postoperative satisfaction across different procedures remains unclear.</p><p><strong>Objectives: </strong>This study aimed to evaluate the influence of 3D simulation and VR technology in patients undergoing rhinoplasty, breast augmentation, mastopexy, augmentation-mastopexy and breast reduction.</p><p><strong>Methods: </strong>A retrospective study was conducted with 75 female patients who underwent primary aesthetic surgery. Preoperative 3D simulations and VR visualizations were generated using the Crisalix Virtual Esthetics system (Crisalix S.A., Switzerland). Patients were assessed postoperatively at one year using structured surveys to evaluate the influence of 3D simulation and VR technology on their decision-making and satisfaction. Statistical analyses included the Kruskal-Wallis H test, the Chi-Square test, and Spearman's correlation.</p><p><strong>Results: </strong>3D simulation had the greatest influence on breast augmentation (8.4/10), rhinoplasty (7.6/10), and augmentation-mastopexy (7.1/10) patients but was less impactful for mastopexy (6.6/10) and breast reduction (3.8/10) patients (<i>p</i> < 0.001). The most decisive factors were previous patient photos (30.7%) and communication with the surgeon (29.3%), with simulation ranking third (18.7%). Postoperative similarity ratings were highest in breast augmentation (7.9/10) and rhinoplasty (7.5/10) patients. While 70.7% of patients would recommend 3D simulation, VR headset use did not influence decisions (<i>p</i> < 0.001).</p><p><strong>Conclusions: </strong>3D simulation enhances patient engagement and expectation management across various aesthetic procedures. While its influence is more significant in surgeries primarily focused on aesthetic outcomes, it serves as a complementary tool rather than a definitive factor in decision-making.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2614532"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145949422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2025-12-31 | DOI: 10.1080/24699322.2025.2604123
Laura Connolly, Hyunwoo Song, Keshuai Xu, Anton Deguet, Simon Leonard, Gabor Fichtinger, Parvin Mousavi, Russell H Taylor, Emad Boctor
Cancer resection surgery is unsuccessful if tumor tissue is left behind in the surgical cavity. Identifying the residual cancer requires additional imaging or postoperative histological analysis. Photoacoustic imaging can be used to image both the surface and depths of the resection cavity; however, its performance hinges on consistent probe placement and stable acoustic and optical coupling. As intra-cavity deployment of photoacoustic imaging is largely uncharted, several potential embodiments warrant rigorous investigation. We address this need with an open-source robotic testbed for intraoperative tumor-bed inspection using photoacoustic imaging. The platform integrates the da Vinci Research Kit, depth imaging, and electromagnetic tracking to automate cavity scanning and maintain repeatable probe trajectories. Using tissue-mimicking phantoms, we (i) demonstrate a novel imaging embodiment for photoacoustic tumor-bed inspection and (ii) show how this testbed can be used to investigate and optimize tumor bed inspection strategies and configurations. This study establishes the feasibility of detecting and mapping residual cancer within a simulated surgical cavity. The primary contribution is the testbed itself, designed for integration with existing surgical navigation workflows and rapid prototyping. This testbed serves as an essential foundation for systematic evaluation of photoacoustic, robot-assisted strategies for improving intraoperative margin assessment.
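As a simple illustration of a repeatable cavity-scanning trajectory (not the platform's actual planner), a serpentine raster pattern over a region of interest could be generated as follows; the coordinates, frame, and step size are arbitrary placeholders.

```python
# Toy waypoint generator for a repeatable probe sweep over a cavity region of interest.
import numpy as np

def raster_waypoints(x_range, y_range, z, step=2.0):
    xs = np.arange(x_range[0], x_range[1] + step, step)
    ys = np.arange(y_range[0], y_range[1] + step, step)
    pts = []
    for i, y in enumerate(ys):
        row = [(x, y, z) for x in (xs if i % 2 == 0 else xs[::-1])]   # serpentine rows
        pts.extend(row)
    return np.array(pts)

waypoints = raster_waypoints((0, 20), (0, 10), z=5.0)   # mm in a hypothetical cavity frame
print(waypoints.shape)                                  # (N, 3) probe positions
```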
{"title":"No cancer left behind: a testbed and demonstration of concept for photoacoustic tumor bed inspection.","authors":"Laura Connolly, Hyunwoo Song, Keshuai Xu, Anton Deguet, Simon Leonard, Gabor Fichtinger, Parvin Mousavi, Russell H Taylor, Emad Boctor","doi":"10.1080/24699322.2025.2604123","DOIUrl":"https://doi.org/10.1080/24699322.2025.2604123","url":null,"abstract":"<p><p>Cancer resection surgery is unsuccessful if tumor tissue is left behind in the surgical cavity. Identifying the residual cancer requires additional imaging or postoperative histological analysis. Photoacoustic imaging can be used to image both the surface and depths of the resection cavity; however, its performance hinges on consistent probe placement and stable acoustic and optical coupling. As intra-cavity deployment of photoacoustic imaging is largely uncharted, several potential embodiments warrant rigorous investigation. We address this need with an open-source robotic testbed for intraoperative tumor-bed inspection using photoacoustic imaging. The platform integrates the da Vinci Research Kit, depth imaging, and electromagnetic tracking to automate cavity scanning and maintain repeatable probe trajectories. Using tissue-mimicking phantoms, we (i) demonstrate a novel imaging embodiment for photoacoustic tumor-bed inspection and (ii) show how this testbed can be used to investigate and optimize tumor bed inspection strategies and configurations. This study establishes the feasibility of detecting and mapping residual cancer within a simulated surgical cavity. The primary contribution is the testbed itself, designed for integration with existing surgical navigation workflows and rapid prototyping. This testbed serves as an essential foundation for systematic evaluation of photoacoustic, robot-assisted strategies for improving intraoperative margin assessment.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2604123"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145879506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-12-01 | Epub Date: 2025-12-26 | DOI: 10.1080/24699322.2025.2597553
Masuda Begum Sampa, Nor Hidayati Abdul Aziz, Md Siddikur Rahman, Nor Azlina Ab Aziz, Rosli Besar, Anith Khairunnisa Ghazali
Reinforcement learning (RL) has emerged as a powerful artificial intelligence paradigm in medical image analysis, excelling in complex decision-making tasks. This systematic review synthesizes the applications of RL across diverse imaging domains, including landmark detection, image segmentation, lesion identification, disease diagnosis, and image registration, by analyzing 20 peer-reviewed studies published between 2019 and 2023. RL methods are categorized into classical and deep reinforcement learning (DRL) approaches, focusing on their performance, integration with other machine learning models, and clinical utility. Deep Q-Networks (DQN) demonstrated strong performance in anatomical landmark detection and cardiovascular risk estimation, while Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) achieved optimal policy learning for vessel tracking. Policy gradient methods such as REINFORCE, Twin-Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) were successfully applied to breast lesion detection, white-matter connectivity analysis, and vertebral segmentation. Monte Carlo learning, meta-RL, and A3C methods proved effective for adaptive questioning, image quality evaluation, and multimodal image registration. To consolidate these findings, we propose a unified Reinforcement Learning Medical Imaging (RLMI) framework encompassing four core components: state representation, policy optimization, reward formulation, and environment modeling. This framework enhances sequential agent learning, stabilizes navigation, and generalizes across imaging modalities and tasks. Key challenges remain, including optimizing task-specific policies, integrating anatomical contexts, addressing data scarcity, and improving interpretability. This review highlights RL's potential to enhance accuracy, adaptability, and efficiency in medical image analysis, providing valuable guidance for researchers and clinicians applying RL in real-world healthcare settings.
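To make the agent framing concrete, the toy example below runs tabular Q-learning for a one-dimensional "move toward a landmark coordinate" task (state, action, reward, environment as in the RLMI components). It is purely illustrative and reproduces none of the reviewed models.

```python
# Toy Q-learning agent stepping along an image axis toward a target landmark index.
import numpy as np

n_pos, target = 20, 14
Q = np.zeros((n_pos, 2))                     # actions: 0 = move left, 1 = move right
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for _ in range(500):                         # training episodes
    s = rng.integers(n_pos)
    for _ in range(50):
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_pos - 1)
        r = 1.0 if s2 == target else -abs(s2 - target) / n_pos   # distance-shaped reward
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # temporal-difference update
        s = s2
        if s == target:
            break

print(Q.argmax(axis=1))   # greedy policy: mostly 1 (right) below the target, 0 above it
```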
{"title":"Reinforcement learning for medical image analysis: a systematic review of algorithms, engineering challenges, and clinical deployment.","authors":"Masuda Begum Sampa, Nor Hidayati Abdul Aziz, Md Siddikur Rahman, Nor Azlina Ab Aziz, Rosli Besar, Anith Khairunnisa Ghazali","doi":"10.1080/24699322.2025.2597553","DOIUrl":"10.1080/24699322.2025.2597553","url":null,"abstract":"<p><p>Reinforcement learning (RL) has emerged as a powerful artificial intelligence paradigm in medical image analysis, excelling in complex decision-making tasks. This systematic review synthesizes the applications of RL across diverse imaging domains-including landmark detection, image segmentation, lesion identification, disease diagnosis, and image registration-by analyzing 20 peer-reviewed studies published between 2019 and 2023. RL methods are categorized into classical and deep reinforcement learning (DRL) approaches, focusing on their performance, integration with other machine learning models, and clinical utility. Deep Q-Networks (DQN) demonstrated strong performance in anatomical landmark detection and cardiovascular risk estimation, while Proximal Policy Optimization (PPO) and Advantage Actor-Critic (A2C) achieved optimal policy learning for vessel tracking. Policy gradient methods such as REINFORCE, Twin-Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) were successfully applied to breast lesion detection, white-matter connectivity analysis, and vertebral segmentation.Monte Carlo learning, meta-RL, and A3C methods proved effective for adaptive questioning, image quality evaluation, and multimodal image registration. To consolidate these findings, we propose a unified Reinforcement Learning Medical Imaging (RLMI) framework encompassing four core components: state representation, policy optimization, reward formulation, and environment modeling. This framework enhances sequential agent learning, stabilizes navigation, and generalizes across imaging modalities and tasks. Key challenges remain, including optimizing task-specific policies, integrating anatomical contexts, addressing data scarcity, and improving interpretability. This review highlights RL's potential to enhance accuracy, adaptability, and efficiency in medical image analysis, providing valuable guidance for researchers and clinicians applying RL in real-world healthcare settings.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"31 1","pages":"2597553"},"PeriodicalIF":1.9,"publicationDate":"2026-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145835369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent success in generative AI has demonstrated great potential in various medical scenarios. However, how to generate realistic, high-fidelity gastrointestinal laparoscopy videos remains largely unexplored. A recent work, Endora, proposes a basic generation model for the gastrointestinal laparoscopy scenario, producing low-resolution laparoscopy videos that cannot meet the real needs of robotic surgery. To address this issue, we propose an innovative two-stage video generation architecture, HiEndo, for generating high-resolution gastrointestinal laparoscopy videos with high fidelity. In the first stage, we build a diffusion transformer that generates a low-resolution laparoscopy video, building on the basic capability of Endora as a starting point. In the second stage, we design a super-resolution module to increase the resolution of the initial video and refine its fine-grained details. With these two stages, we obtain high-resolution, high-fidelity laparoscopy videos suitable for real-world clinical use. We also collect a large-scale gastrointestinal laparoscopy video dataset with 61,270 video clips for training and validation of the proposed method. Extensive experimental results demonstrate the effectiveness of our framework; for example, our model achieves a 15.1% improvement in Fréchet Video Distance and a 3.7% improvement in F1 score compared with the previous state-of-the-art method.
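The following schematic mirrors the two-stage idea (a base generator producing low-resolution frames, then a super-resolution stage upsampling and refining them) with toy stand-in modules; it is not Endora or HiEndo, and the module names and shapes are assumptions for illustration.

```python
# Schematic two-stage sketch: toy base generator -> toy super-resolution refiner.
import torch
import torch.nn as nn

class ToyBaseGenerator(nn.Module):
    """Stage 1 stand-in: latent map -> low-resolution video (B, T, C, 64, 64)."""
    def __init__(self, t=8):
        super().__init__()
        self.t = t
        self.net = nn.Sequential(nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, z):                        # z: (B, 16, 64, 64)
        frames = [self.net(z) for _ in range(self.t)]
        return torch.stack(frames, dim=1)

class ToySuperResolution(nn.Module):
    """Stage 2 stand-in: upsample each frame 4x and refine with a conv layer."""
    def __init__(self):
        super().__init__()
        self.refine = nn.Conv2d(3, 3, 3, padding=1)
    def forward(self, video):                    # (B, T, C, H, W)
        b, t, c, h, w = video.shape
        x = video.reshape(b * t, c, h, w)
        x = nn.functional.interpolate(x, scale_factor=4, mode="bilinear", align_corners=False)
        x = torch.sigmoid(self.refine(x))
        return x.reshape(b, t, c, h * 4, w * 4)

low = ToyBaseGenerator()(torch.randn(1, 16, 64, 64))
high = ToySuperResolution()(low)
print(low.shape, high.shape)                     # (1, 8, 3, 64, 64) -> (1, 8, 3, 256, 256)
```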
{"title":"HiEndo: harnessing large-scale data for generating high-resolution laparoscopy videos under a two-stage framework.","authors":"Zhao Wang, Yeqian Zhang, Jiayi Gu, Yueyao Chen, Yonghao Long, Xiang Xia, Puhua Zhang, Chunchao Zhu, Zerui Wang, Qi Dou, Zheng Wang, Zizhen Zhang","doi":"10.1080/24699322.2025.2536643","DOIUrl":"https://doi.org/10.1080/24699322.2025.2536643","url":null,"abstract":"<p><p>Recent success in generative AI has demonstrated great potential in various medical scenarios. However, how to generate realistic and high-fidelity gastrointestinal laparoscopy videos still lacks exploration. A recent work, Endora, proposes a basic generation model for a gastrointestinal laparoscopy scenario, producing low-resolution laparoscopy videos, which can not meet the real needs in robotic surgery. Regarding this issue, we propose an innovative two-stage video generation architecture HiEndo for generating high-resolution gastrointestinal laparoscopy videos with high fidelity. In the first stage, we build a diffusion transformer for generating a low-resolution laparoscopy video upon the basic capability of Endora as an initial start. In the second stage, we further design a super resolution module to improve the resolution of initial video and refine the fine-grained details. With these two stages, we could obtain high-resolution realistic laparoscopy videos with high fidelity, which can meet the real-world clinical usage. We also collect a large-scale gastrointestinal laparoscopy video dataset with 61,270 video clips for training and validation of our proposed method. Extensive experimental results have demonstrate the effectiveness of our proposed framework. For example, our model achieves 15.1% Fréchet Video Distance and 3.7% F1 score improvements compared with the previous state-of-the-art method.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2536643"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144715233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | Epub Date: 2025-10-31 | DOI: 10.1080/24699322.2025.2580307
Gabriella d'Albenzio, Ruoyan Meng, Davit Aghayan, Egidijus Pelanis, Tomas Sakinis, Ole Vegard Solberg, Geir Arne Tangen, Rahul P Kumar, Ole Jakob Elle, Bjørn Edwin, Rafael Palomar
Purpose: Couinaud's liver segment classification has been widely adopted for liver surgery planning, yet its rigid anatomical boundaries often fail to align precisely with individual patient anatomy. This study proposes a novel patient-specific liver segmentation method based on detailed classification of hepatic and portal veins to improve anatomical adherence and clinical relevance.
Methods: Our proposed method involves two key stages: (1) surgeons annotate vascular endpoints on 3D models of hepatic and portal veins, from which vessel centerlines are computed; and (2) liver segments are calculated by assigning voxel labels based on proximity to these vascular centerlines. The accuracy and clinical applicability of our Hepatic and Portal Vein-based Classification (HPVC) were compared with conventional Plane-Based Classification (PBC), Portal Vein-Based Classification (PVC), and an automated deep learning method (nnU-Net) using volumetric measurements, Dice similarity scores, and expert evaluations.
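Step (2) amounts to nearest-centerline labeling of liver voxels; a minimal sketch using a KD-tree is shown below, with synthetic centerline points, labels, and liver mask standing in for real patient data.

```python
# Minimal sketch: assign each liver voxel the label of its nearest vascular centerline point.
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical centerline points (N, 3) and their segment labels (N,)
centerline_pts = np.array([[10, 10, 10], [10, 40, 10], [40, 25, 30]], dtype=float)
centerline_lab = np.array([5, 6, 8])             # illustrative Couinaud-style labels

liver_mask = np.zeros((50, 50, 50), dtype=bool)
liver_mask[5:45, 5:45, 5:45] = True              # toy liver volume

tree = cKDTree(centerline_pts)
voxels = np.argwhere(liver_mask)                 # (M, 3) voxel coordinates inside the liver
_, nearest = tree.query(voxels)                  # index of nearest centerline point per voxel

segments = np.zeros(liver_mask.shape, dtype=np.int16)
segments[tuple(voxels.T)] = centerline_lab[nearest]
print(np.unique(segments[liver_mask]))           # -> [5 6 8]
```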
Results: HPVC demonstrated superior anatomical conformity compared to traditional methods, especially in complex segments like 5 and 8, providing segmentations more reflective of actual vascular territories. Volumetric analysis revealed significant discrepancies among the methods, particularly with nnU-Net generally producing larger segment volumes. HPVC consistently achieved higher surgeon-rated scores in patient-specific anatomical adherence, perfusion region assessment, and accuracy in surgical planning compared to PBC, PVC, and nnU-Net.
Conclusion: The presented HPVC method offers substantial improvements in liver segmentation precision, especially relevant for surgical planning in anatomically complex cases. Its integration into clinical workflows via the open-source platform 3D Slicer significantly enhances its accessibility and usability.
{"title":"Patient-specific functional liver segments based on centerline classification of the hepatic and portal veins.","authors":"Gabriella d'Albenzio, Ruoyan Meng, Davit Aghayan, Egidijus Pelanis, Tomas Sakinis, Ole Vegard Solberg, Geir Arne Tangen, Rahul P Kumar, Ole Jakob Elle, Bjørn Edwin, Rafael Palomar","doi":"10.1080/24699322.2025.2580307","DOIUrl":"https://doi.org/10.1080/24699322.2025.2580307","url":null,"abstract":"<p><strong>Purpose: </strong>Couinaud's liver segment classification has been widely adopted for liver surgery planning, yet its rigid anatomical boundaries often fail to align precisely with individual patient anatomy. This study proposes a novel patient-specific liver segmentation method based on detailed classification of hepatic and portal veins to improve anatomical adherence and clinical relevance.</p><p><strong>Methods: </strong>Our proposed method involves two key stages: (1) surgeons annotate vascular endpoints on 3D models of hepatic and portal veins, from which vessel centerlines are computed; and (2) liver segments are calculated by assigning voxel labels based on proximity to these vascular centerlines. The accuracy and clinical applicability of our Hepatic and Portal Vein-based Classification (HPVC) were compared with conventional Plane-Based Classification (PBC), Portal Vein-Based Classification (PVC), and an automated deep learning method (nnU-Net) using volumetric measurements, Dice similarity scores, and expert evaluations.</p><p><strong>Results: </strong>HPVC demonstrated superior anatomical conformity compared to traditional methods, especially in complex segments like 5 and 8, providing segmentations more reflective of actual vascular territories. Volumetric analysis revealed significant discrepancies among the methods, particularly with nnU-Net generally producing larger segment volumes. HPVC consistently achieved higher surgeon-rated scores in patient-specific anatomical adherence, perfusion region assessment, and accuracy in surgical planning compared to PBC, PVC, and nnU-Net.</p><p><strong>Conclusion: </strong>The presented HPVC method offers substantial improvements in liver segmentation precision, especially relevant for surgical planning in anatomically complex cases. Its integration into clinical workflows <i>via</i> the open-source platform 3D Slicer significantly enhances its accessibility and usability.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2580307"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145423537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01 | Epub Date: 2025-10-29 | DOI: 10.1080/24699322.2025.2582020
Amit Nissan, Fadi Mahameed, Sapir Gershov, Aeyal Raz, Shlomi Laufer
This study introduces a novel computer vision approach to automate documentation of anesthetic injection events in the operating room. The objective is to enhance documentation accuracy and reliability by providing precise identification of injection events and anesthetic amounts administered, while addressing stopcock placement variability. We developed a computer vision pipeline tailored for automated anesthetic injection documentation in surgical environments. The pipeline leverages the Segment Anything Model (SAM) for robust syringe segmentation, combined with vector similarity matching for generalization across different syringe sizes and occlusions. This few-shot segmentation strategy ensures generalization while minimizing annotation effort. The pipeline also integrates lightweight methods for motion detection, syringe classification, and volume estimation to ensure quasi-real-time performance. The system was tested on 304 injection events performed by 19 anesthesiologists using syringes of four sizes (3, 5, 10 and 20 ml). The pipeline achieved 100% injection-event detection sensitivity and an overall 86.3% documentation success rate. Volume estimation accuracy varied across syringe sizes, with mean absolute error (MAE) values of 0.10, 0.22, 0.37, and 0.61 ml for 3, 5, 10, and 20 ml syringes, respectively. Results compare favorably to manual measurements, which can have mean percentage errors of 1.4%-18.6%. Runtime optimization ensured quasi-real-time operation, processing each event within 10-12 s, supporting clinical workflow integration. This work presents a solution to significantly improve anesthetic injection documentation while enhancing patient safety, standardizing procedures, and reducing anesthesiologists' workload, representing a fully automated, camera-only pipeline validated on clinicians in quasi-real-time.
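As an illustration of the vector-similarity matching step, the toy snippet below compares a query embedding against reference syringe embeddings by cosine similarity; the 128-dimensional vectors are random placeholders, not features produced by the actual pipeline.

```python
# Toy cosine-similarity matching of a query embedding to reference syringe embeddings.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
references = {size: rng.normal(size=128) for size in ("3ml", "5ml", "10ml", "20ml")}
query = references["10ml"] + 0.1 * rng.normal(size=128)   # noisy view of a 10 ml syringe

best = max(references, key=lambda s: cosine(query, references[s]))
print(best)   # expected: "10ml"
```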
{"title":"Efficient computer vision pipeline for automated anesthetic injection documentation.","authors":"Amit Nissan, Fadi Mahameed, Sapir Gershov, Aeyal Raz, Shlomi Laufer","doi":"10.1080/24699322.2025.2582020","DOIUrl":"https://doi.org/10.1080/24699322.2025.2582020","url":null,"abstract":"<p><p>This study introduces a novel computer vision approach to automate documentation of anesthetic injection events in the operating room. The objective is to enhance documentation accuracy and reliability by providing precise identification of injection events and anesthetic amounts administered, while addressing stopcock placement variability. We developed a computer vision pipeline tailored for automated anesthetic injection documentation in surgical environments. The pipeline leverages the Segment Anything Model (SAM) for robust syringe segmentation, combined with vector similarity matching for generalization across different syringe sizes and occlusions. This few-shot segmentation strategy ensures generalization while minimizing annotation effort. The pipeline also integrates lightweight methods for motion detection, syringe classification, and volume estimation to ensure quasi-real-time performance. The system was tested on 304 injection events performed by 19 anesthesiologists using syringes of four sizes (3, 5, 10 and 20 ml). The pipeline achieved 100% injection-event detection sensitivity and an overall 86.3% documentation success rate. Volume estimation accuracy varied across syringe sizes, with mean absolute error (MAE) values of 0.10, 0.22, 0.37, and 0.61 ml for 3, 5, 10, and 20 ml syringes, respectively. Results compare favorably to manual measurements, which can have mean percentage errors of 1.4%-18.6%. Runtime optimization ensured quasi-real-time operation, processing each event within 10-12 s, supporting clinical workflow integration. This work presents a solution to significantly improve anesthetic injection documentation while enhancing patient safety, standardizing procedures, and reducing anesthesiologists' workload, representing a fully automated, camera-only pipeline validated on clinicians in quasi-real-time.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2582020"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145402900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}