
Latest publications in Computer Assisted Surgery

Three-dimensional image-guided navigation technique for femoral artery puncture.
IF 1.9 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2025-12-01 Epub Date: 2025-07-28 DOI: 10.1080/24699322.2025.2535967
Yunmeng Zhang, Shenglin Liu, Qiang Zhang, Qingmin Feng

Percutaneous femoral arterial access is a fundamental procedure in minimally invasive vascular interventions. However, inadequate visualization of the femoral artery may lead to inaccurate puncture and complications, with reported incidence rates of 3% to 18%. This study proposes a three-dimensional (3D) image-guided navigation system designed to enhance real-time visualization of the target vessel and puncture site during femoral artery access. The system employed an Iterative Closest Point (ICP)-based point cloud algorithm to achieve spatial registration between image space and patient space. An improved ICP method was implemented to optimize surface point cloud alignment, providing higher efficiency and accuracy than conventional approaches. Validation experiments were conducted using a standard model and a human phantom. Registration and navigation accuracy were quantified using fiducial registration error (FRE) for spatial alignment, target registration error (TRE) for navigation accuracy, and distance error for puncture precision. The system achieved an FRE of 0.944 mm. On the standard model, the average distance error was 0.885 mm and the TRE was 0.915 mm. On the human phantom, the average distance error was 0.967 mm and the average TRE was 0.981 mm. These results confirm the feasibility and effectiveness of the proposed 3D navigation system in guiding femoral artery puncture. All error metrics were within clinically acceptable thresholds, suggesting potential for improved procedural safety and precision in percutaneous vascular interventions.
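The registration step rests on the classic Iterative Closest Point loop; the abstract does not detail the authors' improved variant, so what follows is only a minimal point-to-point ICP sketch in NumPy, assuming a rigid transform estimated per iteration via the Kabsch (SVD) solution:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iters=50, tol=1e-8):
    """Point-to-point ICP: iteratively match nearest neighbours and re-fit
    the rigid transform; returns src aligned into the dst frame."""
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iters):
        # brute-force nearest neighbour in dst for each current point
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = dst[d2.argmin(1)]
        R, t = best_rigid_transform(cur, nn)
        cur = cur @ R.T + t
        err = np.sqrt(d2.min(1)).mean()
        if abs(prev_err - err) < tol:            # converged
            break
        prev_err = err
    return cur
```

In a navigation system the `src` cloud would come from tracked surface points in patient space and `dst` from the preoperative image surface; here both are synthetic.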

{"title":"Three-dimensional image-guided navigation technique for femoral artery puncture.","authors":"Yunmeng Zhang, Shenglin Liu, Qiang Zhang, Qingmin Feng","doi":"10.1080/24699322.2025.2535967","DOIUrl":"10.1080/24699322.2025.2535967","url":null,"abstract":"<p><p>Percutaneous femoral arterial access is a fundamental procedure in minimally invasive vascular interventions. However, inadequate visualization of the femoral artery may lead to inaccurate puncture and complications, with reported incidence rates of 3 to 18%. This study proposes a three-dimensional (3D) image-guided navigation system designed to enhance real-time visualization of the target vessel and puncture site during femoral artery access. This system employed an Iterative Closest Point (ICP)-based point cloud algorithm to achieve spatial registration between image space and patient space. An improved ICP method is implemented to optimize surface point cloud alignment, providing higher efficiency and accuracy compared to conventional approaches. Validation experiments were conducted using a standard model and a human phantom. Registration and navigation accuracy were quantified using fiducial registration error (FRE) for spatial alignment, target registration error (TRE) for navigation accuracy, and distance error for puncture precision. The system achieved a FRE of 0.944 mm. On the standard model, the average distance error was 0.885 mm, and the TRE was 0.915 mm. On the human phantom, the average distance error is 0.967 mm, and the average TRE is 0.981 mm. These results confirm the feasibility and effectiveness of the proposed 3D navigation system in guiding femoral artery puncture. 
All error metrics were within clinically acceptable thresholds, suggesting potential for improved procedural safety and precision in percutaneous vascular interventions.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2535967"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144735616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Imageless optical navigation system is clinically valid for total knee arthroplasty.
IF 1.5 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2025-12-01 Epub Date: 2025-02-16 DOI: 10.1080/24699322.2025.2466424
Taylor B Winberg, Sheila Wang, James L Howard

Achieving optimal implant position and orientation during total knee arthroplasty (TKA) is a pivotal factor in long-term survival. Computer-assisted navigation (CAN) has been recognized as a trusted technology that improves the accuracy and consistency of femoral and tibial bone cuts. Imageless CAN offers advantages over image-based CAN by reducing cost, radiation exposure, and time. The purpose of this study was to evaluate the accuracy of an imageless optical navigation system for TKA in a clinical setting. Forty-two consecutive patients who underwent primary TKA with CAN were retrospectively reviewed. Femoral and tibial component coronal alignment was assessed via post-operative radiographs by two independent reviewers and compared against coronal alignment angles from the CAN. The primary outcome was the mean absolute difference of femoral and tibial varus/valgus angles between radiograph and intra-operative device measurements. Bland-Altman plots were used to assess agreement between the methods and statistically analyze potential systematic bias. The mean absolute differences between navigation-guided cut measurements and post-operative radiographs were 1.16 ± 1.03° and 1.76 ± 1.38° for femoral and tibial alignment respectively. About 88% of coronal measurements were within ±3°, while 99% were within ±5°. Bland-Altman analysis demonstrated a bias between CAN and radiographic measurements with CAN values averaging 0.52° (95% CI: 0.11°-0.93°) less than their paired radiographic measurements. This study demonstrated the ability of an optical imageless navigation system to measure, on average, femoral and tibial coronal cuts to within 2.0° of post-operative radiographic measurements in a clinical setting.
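The agreement analysis described is standard Bland-Altman methodology. A generic sketch on hypothetical paired measurements follows (not the authors' code; note it returns the 1.96·SD limits of agreement, whereas the confidence interval quoted in the abstract is the CI of the bias itself):

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods, as plotted in a Bland-Altman analysis."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)          # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

Here `a` would hold the intra-operative navigation angles and `b` the radiographic measurements; a nonzero bias indicates one method systematically reads higher than the other.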

{"title":"Imageless optical navigation system is clinically valid for total knee arthroplasty.","authors":"Taylor B Winberg, Sheila Wang, James L Howard","doi":"10.1080/24699322.2025.2466424","DOIUrl":"10.1080/24699322.2025.2466424","url":null,"abstract":"<p><p>Achieving optimal implant position and orientation during total knee arthroplasty (TKA) is a pivotal factor in long-term survival. Computer-assisted navigation (CAN) has been recognized as a trusted technology that improves the accuracy and consistency of femoral and tibial bone cuts. Imageless CAN offers advantages over image-based CAN by reducing cost, radiation exposure, and time. The purpose of this study was to evaluate the accuracy of an imageless optical navigation system for TKA in a clinical setting. Forty-two consecutive patients who underwent primary TKA with CAN were retrospectively reviewed. Femoral and tibial component coronal alignment was assessed <i>via</i> post-operative radiographs by two independent reviewers and compared against coronal alignment angles from the CAN. The primary outcome was the mean absolute difference of femoral and tibial varus/valgus angles between radiograph and intra-operative device measurements. Bland-Altman plots were used to assess agreement between the methods and statistically analyze potential systematic bias. The mean absolute differences between navigation-guided cut measurements and post-operative radiographs were 1.16 ± 1.03° and 1.76 ± 1.38° for femoral and tibial alignment respectively. About 88% of coronal measurements were within ±3°, while 99% were within ±5°. Bland-Altman analysis demonstrated a bias between CAN and radiographic measurements with CAN values averaging 0.52° (95% CI: 0.11°-0.93°) less than their paired radiographic measurements. 
This study demonstrated the ability of an optical imageless navigation system to measure, on average, femoral and tibial coronal cuts to within 2.0° of post-operative radiographic measurements in a clinical setting.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2466424"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143434411","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Surgical hyperspectral imaging: a systematic review.
IF 1.9 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2025-12-01 Epub Date: 2025-08-18 DOI: 10.1080/24699322.2025.2546819
Hafsa Moontari Ali, Yiming Xiao, Marta Kersten-Oertel

Hyperspectral imaging (HSI) is a technique that captures and processes information across a wide spectrum of wavelengths, providing detailed spectral data for each pixel in an image to identify and analyze materials or objects. In the surgical domain, it can provide quantitative and qualitative tissue information without the need for a contrast agent, making it possible to distinguish between different tissue types objectively. In this article, we review the applications of hyperspectral imaging in surgery, focusing on: (1) hardware components and scanning mechanisms of HSI devices, (2) image preprocessing and processing/analysis methods, including classification, segmentation, tissue characterization, and perfusion analysis, and (3) the feasibility of HSI in various surgical procedures, based on human and animal studies. A systematic review of hyperspectral imaging based on the PRISMA guidelines was conducted using specific keywords: allintitle: hyperspectral AND intraoperative OR intervention OR surgery. After applying predefined inclusion and exclusion criteria, 85 papers from the literature were selected for analysis. Our systematic review shows that HSI has demonstrated significant potential as an intraoperative guidance tool, assisting surgeons during tumor resection by generating detailed tissue density maps. Additionally, HSI can play a role in hemodynamic monitoring, providing perfusion maps to assess blood flow during surgery and detect postoperative complications. Despite its promise, challenges such as hardware limitations, real-time processing, and clinical integration remain, highlighting the need for further research and development to advance HSI in surgical applications.

{"title":"Surgical hyperspectral imaging: a systematic review.","authors":"Hafsa Moontari Ali, Yiming Xiao, Marta Kersten-Oertel","doi":"10.1080/24699322.2025.2546819","DOIUrl":"10.1080/24699322.2025.2546819","url":null,"abstract":"<p><p>Hyperspectral imaging (HSI) is a technique that captures and processes information across a wide spectrum of wavelengths, providing detailed spectral data for each pixel in an image to identify and analyze materials or objects. In the surgical domain, it can provide quantitative and qualitative tissue information without the need of any contrast agent, thereby making it possible to distinguish between different tissue types objectively. In this article, we review the applications of hyperspectral imaging in surgery, focusing on: (1) hardware components and scanning mechanisms of HSI devices, (2) image preprocessing and processing/analysis methods, including classification, segmentation, tissue characterization, and perfusion analysis, and (3) the feasibility of HSI in various surgical procedures, based on human and animal studies. A systematic review of hyperspectral imaging based on PRISMA guideline was conducted using specific keywords: allintitle: hyperspectral AND intraoperative OR intervention OR surgery. After applying predefined inclusion and exclusion criteria, 85 papers from the literature were selected for analysis. Our systematic review shows that HSI has demonstrated significant potential as an intraoperative guidance tool, assisting surgeons during tumor resection by generating detailed tissue density maps. Additionally, HSI can play a role in hemodynamic monitoring, providing perfusion maps to assess blood flow during surgery and detect postoperative complications. 
Despite its promise, challenges, such as hardware limitations, real-time processing, and clinical integration remain, highlighting the need for further research and development to advance HSI in surgical applications.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2546819"},"PeriodicalIF":1.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144876920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning methods for clinical workflow phase-based prediction of procedure duration: a benchmark study.
IF 1.5 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2025-12-01 Epub Date: 2025-02-24 DOI: 10.1080/24699322.2025.2466426
Emanuele Frassini, Teddy S Vijfvinkel, Rick M Butler, Maarten van der Elst, Benno H W Hendriks, John J van den Dobbelsteen

This study evaluates the performance of deep learning models in predicting the end time of procedures performed in the cardiac catheterization laboratory (cath lab). We employed only the clinical phases derived from video analysis as input to the algorithms. Our results show that InceptionTime and LSTM-FCN yielded the most accurate predictions. InceptionTime achieves Mean Absolute Error (MAE) values below 5 min and Symmetric Mean Absolute Percentage Error (SMAPE) under 6% at 60-s sampling intervals. In contrast, LSTM with an attention mechanism and standard LSTM models have higher error rates, indicating challenges in handling both long-term and short-term dependencies. CNN-based models, especially InceptionTime, excel at feature extraction across different scales, making them effective for time-series predictions. We also analyzed training and testing times. CNN models, despite higher computational costs, significantly reduce prediction errors. The Transformer model has the fastest inference time, making it ideal for real-time applications. An ensemble model derived by averaging the two best-performing algorithms achieved low MAE and SMAPE, although it required longer training. Future research should validate these findings across different procedural contexts and explore ways to optimize training times without losing accuracy. Integrating these models into clinical scheduling systems could improve efficiency in cath labs. Our research demonstrates that the models we implemented can form the basis of an automated tool that predicts the optimal time to call the next patient with an average error of approximately 30 s. These findings show the effectiveness of deep learning models, especially CNN-based architectures, in accurately predicting procedure end times.
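The two error metrics reported (MAE and SMAPE) can be reproduced with a few lines of NumPy. SMAPE has several competing definitions in the literature; the variant below, with the mean of the absolute values in the denominator, is one common choice and is an assumption, not necessarily the paper's exact formula:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the inputs (e.g. minutes)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, in percent.
    Denominator is the mean of |y| and |y_hat| per sample."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return float(100.0 * np.mean(np.abs(y_pred - y_true) / denom))
```

For duration prediction, `y_true` would hold the actual remaining procedure times and `y_pred` the model outputs at each sampling interval.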

{"title":"Deep learning methods for clinical workflow phase-based prediction of procedure duration: a benchmark study.","authors":"Emanuele Frassini, Teddy S Vijfvinkel, Rick M Butler, Maarten van der Elst, Benno H W Hendriks, John J van den Dobbelsteen","doi":"10.1080/24699322.2025.2466426","DOIUrl":"10.1080/24699322.2025.2466426","url":null,"abstract":"<p><p>This study evaluates the performance of deep learning models in the prediction of the end time of procedures performed in the cardiac catheterization laboratory (cath lab). We employed only the clinical phases derived from video analysis as input to the algorithms. Our results show that InceptionTime and LSTM-FCN yielded the most accurate predictions. InceptionTime achieves Mean Absolute Error (MAE) values below 5 min and Symmetric Mean Absolute Percentage Error (SMAPE) under 6% at 60-s sampling intervals. In contrast, LSTM with attention mechanism and standard LSTM models have higher error rates, indicating challenges in handling both long-term and short-term dependencies. CNN-based models, especially InceptionTime, excel at feature extraction across different scales, making them effective for time-series predictions. We also analyzed training and testing times. CNN models, despite higher computational costs, significantly reduce prediction errors. The Transformer model has the fastest inference time, making it ideal for real-time applications. An ensemble model derived by averaging the two best performing algorithms reported low MAE and SMAPE, although needing longer training. Future research should validate these findings across different procedural contexts and explore ways to optimize training times without losing accuracy. Integrating these models into clinical scheduling systems could improve efficiency in cath labs. 
Our research demonstrates that the models we implemented can form the basis of an automated tool, which predicts the optimal time to call the next patient with an average error of approximately 30 s. These findings show the effectiveness of deep learning models, especially CNN-based architectures, in accurately predicting procedure end times.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2466426"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143484833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MLP-UNet: an algorithm for segmenting lesions in breast and thyroid ultrasound images.
IF 1.5 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2025-12-01 Epub Date: 2025-06-28 DOI: 10.1080/24699322.2025.2523266
Tian-Feng Dong, Chang-Jiang Zhou, Zhen-Yi Huang, Hao Zhao, Xue-Long Wang, Shi-Ju Yan

Breast and thyroid cancers are among the most prevalent and fastest-growing malignancies worldwide, with ultrasound imaging serving as the primary modality for screening and surgical navigation of these lesions. Accurate, real-time lesion segmentation in ultrasound images is crucial for guiding precise needle placement during biopsies and surgeries. To address this clinical need, we propose MLP-UNet, a deep learning model for automatic segmentation of breast tumors and thyroid nodules in ultrasound images. MLP-UNet adopts a U-shaped encoder-decoder architecture and integrates an MLP-based (MAP) module within the encoder stage. A lightweight attention module is employed in the skip connections to enhance feature representation. Using only 33.75 M parameters, MLP-UNet achieves state-of-the-art segmentation performance. On BUSI, it attains Dice, IoU, and Recall of 80.61%, 67.93%, and 80.48%, respectively; on DDTI, it attains a Dice of 81.67% and an IoU of 71.72%. These results outperform several classical and state-of-the-art segmentation networks while maintaining low computational complexity, highlighting significant potential for clinical application in ultrasound-guided surgical navigation systems.
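The Dice and IoU scores quoted are standard overlap metrics for binary segmentation masks. A generic sketch (not the authors' evaluation code) computing both for a predicted and a ground-truth mask:

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU (Jaccard index) for binary masks.
    eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)
```

In a benchmark like BUSI or DDTI these would be averaged over all test images after thresholding the network's probability map.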

{"title":"MLP-UNet: an algorithm for segmenting lesions in breast and thyroid ultrasound images.","authors":"Tian-Feng Dong, Chang-Jiang Zhou, Zhen-Yi Huang, Hao Zhao, Xue-Long Wang, Shi-Ju Yan","doi":"10.1080/24699322.2025.2523266","DOIUrl":"10.1080/24699322.2025.2523266","url":null,"abstract":"<p><p>Breast and thyroid cancers are among the most prevalent and fastest growing malignancies worldwide with ultrasound imaging serving as the primary modality for screening and surgical navigation of these lesions. Accurate and real-time lesion segmentation in ultrasound images is crucial for guiding precise needle placement during biopsies and surgeries. To address this clinical need, we propose <b>MLP-UNet</b>, a deep learning model for automatic segmentation of breast tumors and thyroid nodules in ultrasound images. MLP-UNet adopts an encoder-decoder architecture with a U-shaped structure and integrates a MLP-based module(MAP) module within the encoder stage. Attention module is a lightweight employed during the skip connections to enhance feature representation. Using only using 33.75 M parameters, MLP-UNet achieves state-of-the-art segmentation performance. On the BUSI, it attains Dice, IoU, and Recall of 80.61%, 67.93%, and 80.48%, respectively. And on the DDTI, it attains Dice, IoU, and Recall of 81.67% for Dice, 71.72%. 
These results outperform several classical and state-of-the-art segmentation networks while maintaining low computational complexity, highlighting its significant potential for clinical application in ultrasound-guided surgical navigation systems.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2523266"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144531290","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Risk prediction and analysis of gallbladder polyps with deep neural network.
IF 2.1 CAS Tier 4 Medicine Q3 SURGERY Pub Date: 2024-12-01 Epub Date: 2024-03-23 DOI: 10.1080/24699322.2024.2331774
Kerong Yuan, Xiaofeng Zhang, Qian Yang, Xuesong Deng, Zhe Deng, Xiangyun Liao, Weixin Si
The aim of this study is to analyze the risk factors associated with the development of adenomatous and malignant polyps in the gallbladder. Adenomatous polyps of the gallbladder are considered precancerous and have a high likelihood of progressing into malignancy. Preoperatively, distinguishing between benign gallbladder polyps, adenomatous polyps, and malignant polyps is challenging. The objective is therefore to develop a neural network model that utilizes these risk factors to accurately predict the nature of polyps; such a model can be employed to differentiate the nature of polyps before surgery, enhancing diagnostic accuracy. A retrospective study was conducted on patients who underwent cholecystectomy at the Department of Hepatobiliary Surgery of the Second People's Hospital of Shenzhen between January 2017 and December 2022. The patients' clinical characteristics, laboratory results, and ultrasonographic indices were examined, and a neural network model for predicting the type of polyp was built from the risk variables for the growth of adenomatous and malignant polyps. A normalized confusion matrix and PR and ROC curves were used to evaluate the performance of the model. In total, 287 cases of benign gallbladder polyps, 15 cases of adenomatous polyps, and 27 cases of malignant polyps were analyzed. Hepatitis B core antibody (95% CI -0.237 to 0.061, p < 0.001), number of polyps (95% CI -0.214 to -0.052, p = 0.001), polyp size (95% CI 0.038 to 0.051, p < 0.001), wall thickness (95% CI 0.042 to 0.081, p < 0.001), and gallbladder size (95% CI 0.185 to 0.367, p < 0.001) emerged as independent predictors for gallbladder adenomatous polyps and malignant polyps. Based on these significant findings, we developed a predictive classification model for gallbladder polyps (GBPs): score = -0.149 * core antibody - 0.033 * number of polyps + 0.045 * polyp size + 0.061 * wall thickness + 0.276 * gallbladder size - 4.313. To assess the predictive efficiency of the model, we employed precision-recall (PR) and receiver operating characteristic (ROC) curves; the areas under the curves were 0.945 and 0.930, respectively, indicating excellent predictive capability. A polyp size of 10 mm served as the optimal cutoff value for diagnosing gallbladder adenoma, with a sensitivity of 81.5% and specificity of 60.0%; for the diagnosis of gallbladder cancer, the sensitivity and specificity were 81.5% and 92.5%, respectively. These findings highlight the potential of our predictive model and provide valuable insights into accurate diagnosis and risk assessment for gallbladder polyps. We identified several risk factors associated with the development of adenomatous and malignant polyps in the gallbladder.
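The linear scoring model reported above can be transcribed directly into a small function. The abstract does not specify units or variable encodings (e.g. whether core antibody is a binary positive/negative flag or a titer), so the argument names and units below are assumptions:

```python
def gbp_risk_score(core_antibody, n_polyps, polyp_size_mm,
                   wall_thickness_mm, gallbladder_size):
    """Linear predictive score for gallbladder polyps (GBPs), using the
    coefficients reported in the abstract. Argument encodings/units are
    assumptions; a higher score suggests higher risk."""
    return (-0.149 * core_antibody
            - 0.033 * n_polyps
            + 0.045 * polyp_size_mm
            + 0.061 * wall_thickness_mm
            + 0.276 * gallbladder_size
            - 4.313)
```

The intercept of -4.313 means the score is strongly negative for small measurements, and the dominant positive coefficient on gallbladder size matches its status as an independent predictor in the analysis.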
{"title":"Risk prediction and analysis of gallbladder polyps with deep neural network.","authors":"Kerong Yuan, Xiaofeng Zhang, Qian Yang, Xuesong Deng, Zhe Deng, Xiangyun Liao, Weixin Si","doi":"10.1080/24699322.2024.2331774","DOIUrl":"10.1080/24699322.2024.2331774","url":null,"abstract":"&lt;p&gt;&lt;p&gt;The aim of this study is to analyze the risk factors associated with the development of adenomatous and malignant polyps in the gallbladder. Adenomatous polyps of the gallbladder are considered precancerous and have a high likelihood of progressing into malignancy. Preoperatively, distinguishing between benign gallbladder polyps, adenomatous polyps, and malignant polyps is challenging. Therefore, the objective is to develop a neural network model that utilizes these risk factors to accurately predict the nature of polyps. This predictive model can be employed to differentiate the nature of polyps before surgery, enhancing diagnostic accuracy. A retrospective study was done on patients who had cholecystectomy surgeries at the Department of Hepatobiliary Surgery of the Second People's Hospital of Shenzhen between January 2017 and December 2022. The patients' clinical characteristics, lab results, and ultrasonographic indices were examined. Using risk variables for the growth of adenomatous and malignant polyps in the gallbladder, a neural network model for predicting the kind of polyps will be created. A normalized confusion matrix, PR, and ROC curve were used to evaluate the performance of the model. In this comprehensive study, we meticulously analyzed a total of 287 cases of benign gallbladder polyps, 15 cases of adenomatous polyps, and 27 cases of malignant polyps. The data analysis revealed several significant findings. 
Specifically, hepatitis B core antibody (95% CI -0.237 to 0.061, &lt;i&gt;p&lt;/i&gt; &lt; 0.001), number of polyps (95% CI -0.214 to -0.052, &lt;i&gt;p&lt;/i&gt; = 0.001), polyp size (95% CI 0.038 to 0.051, &lt;i&gt;p&lt;/i&gt; &lt; 0.001), wall thickness (95% CI 0.042 to 0.081, &lt;i&gt;p&lt;/i&gt; &lt; 0.001), and gallbladder size (95% CI 0.185 to 0.367, &lt;i&gt;p&lt;/i&gt; &lt; 0.001) emerged as independent predictors for gallbladder adenomatous polyps and malignant polyps. Based on these significant findings, we developed a predictive classification model for gallbladder polyps, represented as follows, Predictive classification model for GBPs = -0.149 * core antibody - 0.033 * number of polyps + 0.045 * polyp size + 0.061 * wall thickness + 0.276 * gallbladder size - 4.313. To assess the predictive efficiency of the model, we employed precision-recall (PR) and receiver operating characteristic (ROC) curves. The area under the curve (AUC) for the prediction model was 0.945 and 0.930, respectively, indicating excellent predictive capability. We determined that a polyp size of 10 mm served as the optimal cutoff value for diagnosing gallbladder adenoma, with a sensitivity of 81.5% and specificity of 60.0%. For the diagnosis of gallbladder cancer, the sensitivity and specificity were 81.5% and 92.5%, respectively. These findings highlight the potential of our predictive model and provide valuable insights into accurate diagnosis and risk assessment for gallbladder polyps. 
We identified several risk factors associated with the development of adenomatous and malignant polyps in the gallbladder","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2331774"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140195203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A decade of progress: bringing mixed reality image-guided surgery systems in the operating room.
IF 1.5, CAS Zone 4 (Medicine), Q3 SURGERY. Pub Date: 2024-12-01; Epub Date: 2024-05-24. DOI: 10.1080/24699322.2024.2355897
Zahra Asadi, Mehrdad Asadi, Negar Kazemipour, Étienne Léger, Marta Kersten-Oertel

Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.

Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools.
IF 2.1, CAS Zone 4 (Medicine), Q3 SURGERY. Pub Date: 2024-12-01; Epub Date: 2024-03-11. DOI: 10.1080/24699322.2024.2327981
Matteo Rossi, Gabriele Belotti, Luca Mainardi, Guido Baroni, Pietro Cerveri

Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is deemed to be secure for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensity into Hounsfield Unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field-of-view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability, and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points after calibration in replicating the prescribed dose (53.78% vs 90.26%). Real data also confirmed this, with slightly inferior performance for the same criteria (65.36% vs 87.20%).
These results may confirm that generative artificial intelligence brings the use of narrow FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
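The 3%/2 mm gamma pass rate cited above can be illustrated with a simplified 1D sketch. Clinical gamma evaluations are 3D and use dedicated tools; the dose arrays, uniform spacing, and brute-force search here are illustrative assumptions, not the study's implementation.

```python
# Simplified 1D global gamma pass rate combining dose difference (as a
# fraction of the maximum reference dose) and distance-to-agreement.
def gamma_pass_rate(ref, evalu, spacing_mm, dose_pct=3.0, dta_mm=2.0):
    """Fraction of reference points with gamma <= 1."""
    d_crit = dose_pct / 100.0 * max(ref)  # global dose criterion
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(evalu):
            dd = (de - dr) / d_crit                # normalized dose difference
            dx = (j - i) * spacing_mm / dta_mm     # normalized distance
            best = min(best, dd * dd + dx * dx)    # squared gamma candidate
        if best <= 1.0:
            passed += 1
    return passed / len(ref)
```

An identical evaluated distribution passes everywhere (rate 1.0), while a distribution off by far more than the dose criterion everywhere fails (rate 0.0).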

Prediction of additional hospital days in patients undergoing cervical spine surgery with machine learning methods.
IF 1.5, CAS Zone 4 (Medicine), Q3 SURGERY. Pub Date: 2024-12-01; Epub Date: 2024-06-11. DOI: 10.1080/24699322.2024.2345066
Bin Zhang, Shengsheng Huang, Chenxing Zhou, Jichong Zhu, Tianyou Chen, Sitan Feng, Chengqian Huang, Zequn Wang, Shaofeng Wu, Chong Liu, Xinli Zhan

Background: Machine learning (ML), a subset of artificial intelligence (AI), uses algorithms to analyze data and predict outcomes without extensive human intervention. In healthcare, ML is gaining attention for enhancing patient outcomes. This study focuses on predicting additional hospital days (AHD) for patients with cervical spondylosis (CS), a condition affecting the cervical spine. The research aims to develop an ML-based nomogram model analyzing clinical and demographic factors to estimate hospital length of stay (LOS). Accurate AHD predictions enable efficient resource allocation, improved patient care, and potential cost reduction in healthcare.

Methods: The study selected CS patients undergoing cervical spine surgery and investigated their medical data. A total of 945 patients were recruited, 570 males and 375 females. The mean LOS for the total sample was 8.64 ± 3.7 days. A LOS ≤ 8.64 days was categorized as the AHD-negative group (n = 539), and a LOS > 8.64 days comprised the AHD-positive group (n = 406). The collected data was randomly divided into training and validation cohorts using a 7:3 ratio. The parameters included general conditions, chronic diseases, preoperative clinical scores, and preoperative radiographic data including ossification of the anterior longitudinal ligament (OALL), ossification of the posterior longitudinal ligament (OPLL), cervical instability and magnetic resonance imaging T2-weighted imaging high signal (MRI T2WIHS), as well as operative indicators and complications. ML-based models such as Lasso regression, random forest (RF), and support vector machine recursive feature elimination (SVM-RFE) were developed to identify AHD-related risk factors. The intersection of the variables screened by these algorithms was used to construct a nomogram model for predicting AHD. The area under the receiver operating characteristic (ROC) curve (AUC) and the C-index were used to evaluate the nomogram's performance. Calibration curve and decision curve analysis (DCA) were performed to test calibration performance and clinical utility.

Results: For these participants, 25 statistically significant parameters were identified as risk factors for AHD. Among these, nine factors obtained as the intersection of the three ML algorithms were used to develop a nomogram model: gender, age, body mass index (BMI), American Spinal Injury Association (ASIA) scores, MRI T2WIHS, operated segment, intraoperative bleeding volume, volume of drainage, and diabetes. After model validation, the AUC was 0.753 in the training cohort and 0.777 in the validation cohort. The calibration curve exhibited satisfactory agreement between the nomogram predictions and actual probabilities. The C-index was 0.788 (95% CI: 0.73214-0.84386). In the DCA, the nomogram's threshold probability ranged from 1% to 99% in the training cohort and from 1% to 75% in the validation cohort. Conclusions: We successfully developed an ML model for predicting AHD in patients undergoing cervical spine surgery, demonstrating its potential to support clinicians in identifying AHD and improving perioperative treatment strategies.
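The AUC used to validate the nomogram can be illustrated with a minimal rank-based implementation (the Mann-Whitney formulation: the probability that a random positive case scores higher than a random negative one). The labels and scores in the test are toy values, not study data.

```python
# Minimal AUC (area under the ROC curve) via pairwise comparisons;
# ties between a positive and a negative score count as half a win.
def roc_auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This quadratic-time version is only practical for small samples; production code typically uses a sort-based O(n log n) formulation as in standard ML libraries.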

SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation.
IF 2.1, CAS Zone 4 (Medicine), Q3 SURGERY. Pub Date: 2024-12-01; Epub Date: 2024-03-20. DOI: 10.1080/24699322.2024.2329675
Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia

The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet the real-time computational demands. We propose a novel network SwinD-Net based on Skip connections, incorporating Depthwise separable convolutions and Swin Transformer Blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer Blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it in terms of the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving the inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
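The parameter savings from the depthwise separable convolutions named above can be sketched by simple counting: a depthwise k×k convolution per input channel followed by a pointwise 1×1 convolution replaces a full k×k convolution. The channel sizes in the comment are generic examples, not SwinD-Net's actual layer dimensions.

```python
# Weight counts (biases omitted) for a standard convolution versus a
# depthwise separable convolution with the same input/output channels.
def conv_params(c_in, c_out, k):
    return k * k * c_in * c_out        # one k x k filter per output channel

def dws_conv_params(c_in, c_out, k):
    depthwise = k * k * c_in           # one k x k filter per input channel
    pointwise = c_in * c_out           # 1 x 1 conv mixing channels
    return depthwise + pointwise

# e.g. a 3 x 3 conv from 64 to 128 channels:
# standard 73,728 weights vs depthwise separable 8,768 (~8.4x fewer)
```

This roughly k²-fold reduction in weights (and FLOPs) is what lets such blocks stay lightweight enough for CPU-only inference.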
