Pub Date: 2025-12-01 | Epub Date: 2025-01-22 | DOI: 10.1080/24699322.2025.2456303
Chen Yang, Lei Chen, Xiangyu Xie, Changping Wu, Qianyun Wang
Desmoid fibromatosis (DF) is a rare, low-grade, benign myofibroblastic neoplasm that originates from fascia and muscle striae. For giant chest wall DF, surgical resection offers a radical form of treatment, and the resulting defects usually need repair and reconstruction to restore the structural integrity and rigidity of the thoracic cage. The past decade has witnessed rapid advances in the application of various prosthetic materials in thoracic surgery. However, three-dimensional (3D)-printed custom-made titanium ribs have never been reported for chest wall reconstruction after DF resection. Here, we report the successful implantation of individualized 3D-printed titanium ribs to repair the chest wall defect in a patient with DF.
{"title":"Three-dimensional (3D)-printed custom-made titanium ribs for chest wall reconstruction post-desmoid fibromatosis resection.","authors":"Chen Yang, Lei Chen, Xiangyu Xie, Changping Wu, Qianyun Wang","doi":"10.1080/24699322.2025.2456303","DOIUrl":"https://doi.org/10.1080/24699322.2025.2456303","url":null,"abstract":"<p><p>Desmoid fibromatosis (DF) is a rare low-grade benign myofibroblastic neoplasm that originates from fascia and muscle striae. For giant chest wall DF, surgical resection offer a radical form of treatment and the causing defects usually need repair and reconstruction, which can restore the structural integrity and rigidity of the thoracic cage. The past decade witnessed rapid advances in the application of various prosthetic material in thoracic surgery. However, three-dimensional (3D)-printed custom-made titanium ribs have never been reported for chest wall reconstruction post-DF resection. Here, we report a successful implantation of individualized 3D-printed titanium ribs to repair the chest wall defect in a patient with DF.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"30 1","pages":"2456303"},"PeriodicalIF":1.5,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143017030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The aim of this study is to analyze the risk factors associated with the development of adenomatous and malignant polyps in the gallbladder. Adenomatous polyps of the gallbladder are considered precancerous and have a high likelihood of progressing to malignancy. Preoperatively, distinguishing between benign gallbladder polyps, adenomatous polyps, and malignant polyps is challenging. Therefore, the objective is to develop a neural network model that utilizes these risk factors to accurately predict the nature of polyps. This predictive model can be employed to differentiate the nature of polyps before surgery, enhancing diagnostic accuracy. A retrospective study was performed on patients who underwent cholecystectomy at the Department of Hepatobiliary Surgery of the Second People's Hospital of Shenzhen between January 2017 and December 2022. The patients' clinical characteristics, laboratory results, and ultrasonographic indices were examined. Using risk variables for the growth of adenomatous and malignant polyps in the gallbladder, a neural network model for predicting the type of polyp was created. A normalized confusion matrix, precision-recall (PR) curve, and receiver operating characteristic (ROC) curve were used to evaluate the performance of the model. In total, 287 cases of benign gallbladder polyps, 15 cases of adenomatous polyps, and 27 cases of malignant polyps were analyzed. Hepatitis B core antibody (95% CI -0.237 to 0.061, p < 0.001), number of polyps (95% CI -0.214 to -0.052, p = 0.001), polyp size (95% CI 0.038 to 0.051, p < 0.001), wall thickness (95% CI 0.042 to 0.081, p < 0.001), and gallbladder size (95% CI 0.185 to 0.367, p < 0.001) emerged as independent predictors of gallbladder adenomatous and malignant polyps. Based on these findings, we developed a predictive classification model for gallbladder polyps (GBPs): score = -0.149 × core antibody - 0.033 × number of polyps + 0.045 × polyp size + 0.061 × wall thickness + 0.276 × gallbladder size - 4.313. The areas under the PR and ROC curves (AUC) for the prediction model were 0.945 and 0.930, respectively, indicating excellent predictive capability. A polyp size of 10 mm served as the optimal cutoff value for diagnosing gallbladder adenoma, with a sensitivity of 81.5% and a specificity of 60.0%. For the diagnosis of gallbladder cancer, the sensitivity and specificity were 81.5% and 92.5%, respectively. These findings highlight the potential of our predictive model and provide valuable insights into accurate diagnosis and risk assessment for gallbladder polyps. We identified several risk factors associated with the development of adenomatous and malignant polyps in the gallbladder.
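To make the published score concrete, the sketch below evaluates the reported linear classification model; the coding of the predictors (e.g. hepatitis B core antibody as 0/1), the measurement units, and the example patient values are illustrative assumptions, since the abstract does not define them.

```python
# Hedged sketch: evaluates the published linear score for gallbladder polyps (GBPs).
# Predictor coding (HBcAb as 0/1, sizes in mm/cm) and the example values are
# assumptions for illustration; they are not specified in the abstract.

def gbp_score(core_antibody: float, n_polyps: float, polyp_size_mm: float,
              wall_thickness_mm: float, gallbladder_size_cm: float) -> float:
    """Linear predictor with the coefficients reported in the abstract."""
    return (-0.149 * core_antibody
            - 0.033 * n_polyps
            + 0.045 * polyp_size_mm
            + 0.061 * wall_thickness_mm
            + 0.276 * gallbladder_size_cm
            - 4.313)

if __name__ == "__main__":
    # Hypothetical patient: HBcAb-negative, solitary 12 mm polyp, 3 mm wall, 8 cm gallbladder.
    s = gbp_score(core_antibody=0, n_polyps=1, polyp_size_mm=12,
                  wall_thickness_mm=3, gallbladder_size_cm=8)
    print(f"score = {s:.3f}")  # a higher score suggests adenomatous/malignant risk
```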
{"title":"Risk prediction and analysis of gallbladder polyps with deep neural network.","authors":"Kerong Yuan, Xiaofeng Zhang, Qian Yang, Xuesong Deng, Zhe Deng, Xiangyun Liao, Weixin Si","doi":"10.1080/24699322.2024.2331774","DOIUrl":"10.1080/24699322.2024.2331774","url":null,"abstract":"<p><p>The aim of this study is to analyze the risk factors associated with the development of adenomatous and malignant polyps in the gallbladder. Adenomatous polyps of the gallbladder are considered precancerous and have a high likelihood of progressing into malignancy. Preoperatively, distinguishing between benign gallbladder polyps, adenomatous polyps, and malignant polyps is challenging. Therefore, the objective is to develop a neural network model that utilizes these risk factors to accurately predict the nature of polyps. This predictive model can be employed to differentiate the nature of polyps before surgery, enhancing diagnostic accuracy. A retrospective study was done on patients who had cholecystectomy surgeries at the Department of Hepatobiliary Surgery of the Second People's Hospital of Shenzhen between January 2017 and December 2022. The patients' clinical characteristics, lab results, and ultrasonographic indices were examined. Using risk variables for the growth of adenomatous and malignant polyps in the gallbladder, a neural network model for predicting the kind of polyps will be created. A normalized confusion matrix, PR, and ROC curve were used to evaluate the performance of the model. In this comprehensive study, we meticulously analyzed a total of 287 cases of benign gallbladder polyps, 15 cases of adenomatous polyps, and 27 cases of malignant polyps. The data analysis revealed several significant findings. Specifically, hepatitis B core antibody (95% CI -0.237 to 0.061, <i>p</i> < 0.001), number of polyps (95% CI -0.214 to -0.052, <i>p</i> = 0.001), polyp size (95% CI 0.038 to 0.051, <i>p</i> < 0.001), wall thickness (95% CI 0.042 to 0.081, <i>p</i> < 0.001), and gallbladder size (95% CI 0.185 to 0.367, <i>p</i> < 0.001) emerged as independent predictors for gallbladder adenomatous polyps and malignant polyps. Based on these significant findings, we developed a predictive classification model for gallbladder polyps, represented as follows, Predictive classification model for GBPs = -0.149 * core antibody - 0.033 * number of polyps + 0.045 * polyp size + 0.061 * wall thickness + 0.276 * gallbladder size - 4.313. To assess the predictive efficiency of the model, we employed precision-recall (PR) and receiver operating characteristic (ROC) curves. The area under the curve (AUC) for the prediction model was 0.945 and 0.930, respectively, indicating excellent predictive capability. We determined that a polyp size of 10 mm served as the optimal cutoff value for diagnosing gallbladder adenoma, with a sensitivity of 81.5% and specificity of 60.0%. For the diagnosis of gallbladder cancer, the sensitivity and specificity were 81.5% and 92.5%, respectively. These findings highlight the potential of our predictive model and provide valuable insights into accurate diagnosis and risk assessment for gallbladder polyps. 
We identified several risk factors associated with the development of adenomatous and malignant polyps in the gallbladder","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2331774"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140195203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-03-11 | DOI: 10.1080/24699322.2024.2327981
Matteo Rossi, Gabriele Belotti, Luca Mainardi, Guido Baroni, Pietro Cerveri
Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is considered safe for patients, making it suitable for imaging at each treatment fraction. However, limitations such as a narrow field of view, beam hardening, scattered-radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensities to Hounsfield unit (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field of view (FOV) systems and demonstrates its potential use in proton treatment planning updates. A cycle-consistent generative adversarial network (cGAN) processes the raw CBCT to reduce scatter and remap intensities to HU. Monte Carlo simulation is used to generate CBCT scans, making it possible to focus solely on the algorithm's ability to reduce artifacts and cupping effects, without intra-patient longitudinal variability, and to produce a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To demonstrate the viability of the approach on real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2% compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) to about 2% (calibrated). The gamma pass rate at 3%/2 mm improved by about 37 percentage points in replicating the prescribed dose before and after calibration (53.78% vs. 90.26%). Real data confirmed this, with slightly lower performance for the same criteria (65.36% vs. 87.20%). These results suggest that generative artificial intelligence brings the use of narrow-FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.
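For context on the metric reported above, the sketch below computes a simplified, global-normalization gamma pass rate at 3%/2 mm on a 1D dose profile; it is not the authors' implementation, whose exact settings (local vs. global criterion, 3D grids, interpolation) are not stated in the abstract.

```python
import numpy as np

# Hedged sketch of the gamma pass-rate metric (3%/2 mm): brute-force, 1D,
# global-normalization version for illustration only.

def gamma_pass_rate(ref_dose, eval_dose, positions_mm,
                    dose_crit=0.03, dist_crit_mm=2.0):
    ref_dose = np.asarray(ref_dose, dtype=float)
    eval_dose = np.asarray(eval_dose, dtype=float)
    positions_mm = np.asarray(positions_mm, dtype=float)
    dose_norm = dose_crit * ref_dose.max()            # global dose criterion
    gammas = []
    for x_r, d_r in zip(positions_mm, ref_dose):
        dd = (eval_dose - d_r) / dose_norm            # dose-difference term
        dx = (positions_mm - x_r) / dist_crit_mm      # distance-to-agreement term
        gammas.append(np.sqrt(dd**2 + dx**2).min())   # minimum over evaluated points
    return 100.0 * np.mean(np.array(gammas) <= 1.0)   # fraction of passing points, in %

# Toy example: an evaluated profile shifted by 1 mm relative to the reference.
x = np.arange(0, 100, 1.0)                            # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)                   # synthetic reference dose
ev = np.exp(-((x - 51) / 15) ** 2)                    # synthetic "calibrated CBCT" dose
print(f"gamma pass rate: {gamma_pass_rate(ref, ev, x):.1f}%")
```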
{"title":"Feasibility of proton dosimetry overriding planning CT with daily CBCT elaborated through generative artificial intelligence tools.","authors":"Matteo Rossi, Gabriele Belotti, Luca Mainardi, Guido Baroni, Pietro Cerveri","doi":"10.1080/24699322.2024.2327981","DOIUrl":"10.1080/24699322.2024.2327981","url":null,"abstract":"<p><p>Radiotherapy commonly utilizes cone beam computed tomography (CBCT) for patient positioning and treatment monitoring. CBCT is deemed to be secure for patients, making it suitable for the delivery of fractional doses. However, limitations such as a narrow field of view, beam hardening, scattered radiation artifacts, and variability in pixel intensity hinder the direct use of raw CBCT for dose recalculation during treatment. To address this issue, reliable correction techniques are necessary to remove artifacts and remap pixel intensity into Hounsfield Units (HU) values. This study proposes a deep-learning framework for calibrating CBCT images acquired with narrow field of view (FOV) systems and demonstrates its potential use in proton treatment planning updates. Cycle-consistent generative adversarial networks (cGAN) processes raw CBCT to reduce scatter and remap HU. Monte Carlo simulation is used to generate CBCT scans, enabling the possibility to focus solely on the algorithm's ability to reduce artifacts and cupping effects without considering intra-patient longitudinal variability and producing a fair comparison between planning CT (pCT) and calibrated CBCT dosimetry. To showcase the viability of the approach using real-world data, experiments were also conducted using real CBCT. Tests were performed on a publicly available dataset of 40 patients who received ablative radiation therapy for pancreatic cancer. The simulated CBCT calibration led to a difference in proton dosimetry of less than 2%, compared to the planning CT. The potential toxicity effect on the organs at risk decreased from about 50% (uncalibrated) up the 2% (calibrated). The gamma pass rate at 3%/2 mm produced an improvement of about 37% in replicating the prescribed dose before and after calibration (53.78% vs 90.26%). Real data also confirmed this with slightly inferior performances for the same criteria (65.36% vs 87.20%). These results may confirm that generative artificial intelligence brings the use of narrow FOV CBCT scans incrementally closer to clinical translation in proton therapy planning updates.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2327981"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140102858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-05-24 | DOI: 10.1080/24699322.2024.2355897
Zahra Asadi, Mehrdad Asadi, Negar Kazemipour, Étienne Léger, Marta Kersten-Oertel
Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.
{"title":"A decade of progress: bringing mixed reality image-guided surgery systems in the operating room.","authors":"Zahra Asadi, Mehrdad Asadi, Negar Kazemipour, Étienne Léger, Marta Kersten-Oertel","doi":"10.1080/24699322.2024.2355897","DOIUrl":"https://doi.org/10.1080/24699322.2024.2355897","url":null,"abstract":"<p><p>Advancements in mixed reality (MR) have led to innovative approaches in image-guided surgery (IGS). In this paper, we provide a comprehensive analysis of the current state of MR in image-guided procedures across various surgical domains. Using the Data Visualization View (DVV) Taxonomy, we analyze the progress made since a 2013 literature review paper on MR IGS systems. In addition to examining the current surgical domains using MR systems, we explore trends in types of MR hardware used, type of data visualized, visualizations of virtual elements, and interaction methods in use. Our analysis also covers the metrics used to evaluate these systems in the operating room (OR), both qualitative and quantitative assessments, and clinical studies that have demonstrated the potential of MR technologies to enhance surgical workflows and outcomes. We also address current challenges and future directions that would further establish the use of MR in IGS.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2355897"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141094751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet real-time computational demands. We propose a novel network, SwinD-Net, based on skip connections and incorporating depthwise separable convolutions and Swin Transformer blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it on the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.
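As a minimal illustration of the lightweight building block named above, the sketch below implements a generic depthwise separable convolution in PyTorch; the channel sizes, normalization, and activation are illustrative assumptions, not the authors' SwinD-Net configuration (see the linked repository for the actual code).

```python
import torch
import torch.nn as nn

# Hedged sketch of a depthwise separable convolution block: a depthwise 3x3
# convolution followed by a pointwise 1x1 convolution, which uses far fewer
# parameters and FLOPs than a standard 3x3 convolution with the same channels.

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 128, 128)        # dummy feature map
block = DepthwiseSeparableConv(32, 64)
print(block(x).shape)                   # torch.Size([1, 64, 128, 128])
```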
{"title":"SwinD-Net: a lightweight segmentation network for laparoscopic liver segmentation.","authors":"Shuiming Ouyang, Baochun He, Huoling Luo, Fucang Jia","doi":"10.1080/24699322.2024.2329675","DOIUrl":"10.1080/24699322.2024.2329675","url":null,"abstract":"<p><p>The real-time requirement for image segmentation in laparoscopic surgical assistance systems is extremely high. Although traditional deep learning models can ensure high segmentation accuracy, they suffer from a large computational burden. In the practical setting of most hospitals, where powerful computing resources are lacking, these models cannot meet the real-time computational demands. We propose a novel network SwinD-Net based on Skip connections, incorporating Depthwise separable convolutions and Swin Transformer Blocks. To reduce computational overhead, we eliminate the skip connection in the first layer and reduce the number of channels in shallow feature maps. Additionally, we introduce Swin Transformer Blocks, which have a larger computational and parameter footprint, to extract global information and capture high-level semantic features. Through these modifications, our network achieves desirable performance while maintaining a lightweight design. We conduct experiments on the CholecSeg8k dataset to validate the effectiveness of our approach. Compared to other models, our approach achieves high accuracy while significantly reducing computational and parameter overhead. Specifically, our model requires only 98.82 M floating-point operations (FLOPs) and 0.52 M parameters, with an inference time of 47.49 ms per image on a CPU. Compared to the recently proposed lightweight segmentation network UNeXt, our model not only outperforms it in terms of the Dice metric but also has only 1/3 of the parameters and 1/22 of the FLOPs. In addition, our model achieves a 2.4 times faster inference speed than UNeXt, demonstrating comprehensive improvements in both accuracy and speed. Our model effectively reduces parameter count and computational complexity, improving the inference speed while maintaining comparable accuracy. The source code will be available at https://github.com/ouyangshuiming/SwinDNet.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2329675"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140177886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-01-23 | DOI: 10.1080/24699322.2023.2276055
Benjamin Hohlmann, Peter Broessner, Klaus Radermacher
Computer-assisted orthopedic surgery requires precise representations of bone surfaces. To date, computed tomography constitutes the gold standard, but comes with a number of limitations, including costs, radiation and availability. Ultrasound has potential to become an alternative to computed tomography, yet suffers from low image quality and limited field-of-view. These shortcomings may be addressed by a fully automatic segmentation and model-based completion of 3D bone surfaces from ultrasound images. This survey summarizes the state-of-the-art in this field by introducing employed algorithms, and determining challenges and trends. For segmentation, a clear trend toward machine learning-based algorithms can be observed. For 3D bone model completion however, none of the published methods involve machine learning. Furthermore, data sets and metrics are identified as weak spots in current research, preventing development and evaluation of models that generalize well.
{"title":"Ultrasound-based 3D bone modelling in computer assisted orthopedic surgery - a review and future challenges.","authors":"Benjamin Hohlmann, Peter Broessner, Klaus Radermacher","doi":"10.1080/24699322.2023.2276055","DOIUrl":"10.1080/24699322.2023.2276055","url":null,"abstract":"<p><p>Computer-assisted orthopedic surgery requires precise representations of bone surfaces. To date, computed tomography constitutes the gold standard, but comes with a number of limitations, including costs, radiation and availability. Ultrasound has potential to become an alternative to computed tomography, yet suffers from low image quality and limited field-of-view. These shortcomings may be addressed by a fully automatic segmentation and model-based completion of 3D bone surfaces from ultrasound images. This survey summarizes the state-of-the-art in this field by introducing employed algorithms, and determining challenges and trends. For segmentation, a clear trend toward machine learning-based algorithms can be observed. For 3D bone model completion however, none of the published methods involve machine learning. Furthermore, data sets and metrics are identified as weak spots in current research, preventing development and evaluation of models that generalize well.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2276055"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139543506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Background: Machine learning (ML), a subset of artificial intelligence (AI), uses algorithms to analyze data and predict outcomes without extensive human intervention. In healthcare, ML is gaining attention for enhancing patient outcomes. This study focuses on predicting additional hospital days (AHD) for patients with cervical spondylosis (CS), a condition affecting the cervical spine. The research aims to develop an ML-based nomogram model analyzing clinical and demographic factors to estimate hospital length of stay (LOS). Accurate AHD predictions enable efficient resource allocation, improved patient care, and potential cost reduction in healthcare.

Methods: The study selected CS patients undergoing cervical spine surgery and investigated their medical data. A total of 945 patients were recruited, with 570 males and 375 females. The mean LOS for the total sample was 8.64 ± 3.7 days. A LOS of 8.64 days or less was categorized as the AHD-negative group (n = 539), and a LOS > 8.64 days comprised the AHD-positive group (n = 406). The collected data were randomly divided into training and validation cohorts at a 7:3 ratio. The parameters included general condition, chronic diseases, preoperative clinical scores, preoperative radiographic data including ossification of the anterior longitudinal ligament (OALL), ossification of the posterior longitudinal ligament (OPLL), cervical instability and magnetic resonance imaging T2-weighted imaging high signal (MRI T2WIHS), operative indicators, and complications. ML-based models such as Lasso regression, random forest (RF), and support vector machine recursive feature elimination (SVM-RFE) were developed for predicting AHD-related risk factors. The intersection of the variables screened by these algorithms was used to construct a nomogram model for predicting AHD. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve and the C-index were used to evaluate the performance of the nomogram. Calibration curve and decision curve analysis (DCA) were performed to test calibration performance and clinical utility.

Results: Twenty-five statistically significant parameters were identified as risk factors for AHD. Among these, nine factors lay at the intersection of the three ML algorithms and were used to develop the nomogram model: gender, age, body mass index (BMI), American Spinal Injury Association (ASIA) scores, MRI T2WIHS, operated segment, intraoperative bleeding volume, volume of drainage, and diabetes. After model validation, the AUC was 0.753 in the training cohort and 0.777 in the validation cohort. The calibration curve exhibited satisfactory agreement between the nomogram predictions and actual probabilities. The C-index was 0.788 (95% confidence interval: 0.73214-0.84386). In the DCA, the nomogram's threshold probability ranged from 1% to 99% in the training cohort and from 1% to 75% in the validation cohort.

Conclusions: We successfully developed an ML-based model for predicting AHD in patients undergoing cervical spine surgery, demonstrating its potential to support clinicians in identifying AHD and improving perioperative treatment strategies.
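The sketch below illustrates, on synthetic data, the kind of feature-intersection step described in the Methods: variables kept by Lasso, random forest, and SVM-RFE are intersected to obtain candidate nomogram predictors. The data, variable names, and hyperparameters are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.svm import SVC

# Hedged sketch: intersect features selected by Lasso, RF importance, and SVM-RFE.
# Synthetic data with hypothetical variable names; all settings are illustrative.
X, y = make_classification(n_samples=400, n_features=25, n_informative=8, random_state=0)
names = np.array([f"var_{i}" for i in range(X.shape[1])])

lasso = SelectFromModel(LassoCV(cv=5, random_state=0)).fit(X, y)
rf = SelectFromModel(RandomForestClassifier(n_estimators=300, random_state=0)).fit(X, y)
svm_rfe = RFE(SVC(kernel="linear"), n_features_to_select=9).fit(X, y)

selected = [set(names[m.get_support()]) for m in (lasso, rf, svm_rfe)]
common = set.intersection(*selected)   # candidate predictors for the nomogram
print(sorted(common))
```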
{"title":"Prediction of additional hospital days in patients undergoing cervical spine surgery with machine learning methods.","authors":"Bin Zhang, Shengsheng Huang, Chenxing Zhou, Jichong Zhu, Tianyou Chen, Sitan Feng, Chengqian Huang, Zequn Wang, Shaofeng Wu, Chong Liu, Xinli Zhan","doi":"10.1080/24699322.2024.2345066","DOIUrl":"https://doi.org/10.1080/24699322.2024.2345066","url":null,"abstract":"<p><strong>Background: </strong>Machine learning (ML), a subset of artificial intelligence (AI), uses algorithms to analyze data and predict outcomes without extensive human intervention. In healthcare, ML is gaining attention for enhancing patient outcomes. This study focuses on predicting additional hospital days (AHD) for patients with cervical spondylosis (CS), a condition affecting the cervical spine. The research aims to develop an ML-based nomogram model analyzing clinical and demographic factors to estimate hospital length of stay (LOS). Accurate AHD predictions enable efficient resource allocation, improved patient care, and potential cost reduction in healthcare.</p><p><strong>Methods: </strong>The study selected CS patients undergoing cervical spine surgery and investigated their medical data. A total of 945 patients were recruited, with 570 males and 375 females. The mean number of LOS calculated for the total sample was 8.64 ± 3.7 days. A LOS equal to or <8.64 days was categorized as the AHD-negative group (<i>n</i> = 539), and a LOS > 8.64 days comprised the AHD-positive group (<i>n</i> = 406). The collected data was randomly divided into training and validation cohorts using a 7:3 ratio. The parameters included their general conditions, chronic diseases, preoperative clinical scores, and preoperative radiographic data including ossification of the anterior longitudinal ligament (OALL), ossification of the posterior longitudinal ligament (OPLL), cervical instability and magnetic resonance imaging T2-weighted imaging high signal (MRI T2WIHS), operative indicators and complications. ML-based models like Lasso regression, random forest (RF), and support vector machine (SVM) recursive feature elimination (SVM-RFE) were developed for predicting AHD-related risk factors. The intersections of the variables screened by the aforementioned algorithms were utilized to construct a nomogram model for predicting AHD in patients. The area under the curve (AUC) of the receiver operating characteristic (ROC) curve and C-index were used to evaluate the performance of the nomogram. Calibration curve and decision curve analysis (DCA) were performed to test the calibration performance and clinical utility.</p><p><strong>Results: </strong>For these participants, 25 statistically significant parameters were identified as risk factors for AHD. Among these, nine factors were obtained as the intersection factors of these three ML algorithms and were used to develop a nomogram model. These factors were gender, age, body mass index (BMI), American Spinal Injury Association (ASIA) scores, magnetic resonance imaging T2-weighted imaging high signal (MRI T2WIHS), operated segment, intraoperative bleeding volume, the volume of drainage, and diabetes. After model validation, the AUC was 0.753 in the training cohort and 0.777 in the validation cohort. The calibration curve exhibited a satisfactory agreement between the nomogram predictions and actual probabilities. 
T","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2345066"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141302103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-02-05 | DOI: 10.1080/24699322.2024.2311940
Xinman Liu, Weiping Xiao, Yibing Yang, Yan Yan, Feng Liang
Objectives: We aimed to prospectively investigate the benefit of using augmented reality (AR) for surgery residents learning aneurysm surgery.
Materials and methods: Eight residents were included and divided into an AR group and a control group (4 in each group). Both groups were asked to mark the location of an aneurysm with a blue circle on the same screenshot after viewing surgery videos in both AR and non-AR tests. Only the AR group was allowed to inspect and manipulate an AR holographic representation of the aneurysm in the AR tests. The actual location of the aneurysm was marked with a yellow circle by an attending physician after each test. Localization deviation was determined by the distance between the blue and yellow circles.
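A minimal sketch of this deviation measurement is shown below; the pixel coordinates and the pixel-to-millimeter scale are illustrative assumptions, as the abstract does not specify how the screenshots were calibrated.

```python
import math

# Hedged sketch of the localization-deviation metric: the distance between the
# resident's (blue) and attending's (yellow) circle centers. Coordinates and the
# 0.1 mm/px scale are illustrative assumptions, not values from the study.

def localization_deviation_mm(blue_px, yellow_px, mm_per_px):
    dx = blue_px[0] - yellow_px[0]
    dy = blue_px[1] - yellow_px[1]
    return math.hypot(dx, dy) * mm_per_px

# Hypothetical annotation: centers 30 px apart on a screenshot scaled at 0.1 mm/px.
print(f"{localization_deviation_mm((412, 298), (430, 274), 0.1):.1f} mm")  # -> 3.0 mm
```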
Results: Localization deviation was lower in the AR group than in the control group in the last 2 tests (AR Test 2: 2.7 ± 1.0 mm vs. 5.8 ± 4.1 mm, p = 0.01, non-AR Test 2: 2.1 ± 0.8 mm vs. 5.9 ± 5.8 mm, p < 0.001). The mean deviation was lower in non-AR Test 2 as compared to non-AR Test 1 in both groups (AR: p < 0.001, control: p = 0.391). The localization deviation of the AR group decreased from 8.1 ± 3.8 mm in Test 2 to 2.7 ± 1.0 mm in AR Test 2 (p < 0.001).
Conclusion: AR technology provides an effective and interactive way for neurosurgery training, and shortens the learning curve for residents in aneurysm surgery.
{"title":"Augmented reality technology shortens aneurysm surgery learning curve for residents.","authors":"Xinman Liu, Weiping Xiao, Yibing Yang, Yan Yan, Feng Liang","doi":"10.1080/24699322.2024.2311940","DOIUrl":"10.1080/24699322.2024.2311940","url":null,"abstract":"<p><strong>Objectives: </strong>We aimed to prospectively investigate the benefit of using augmented reality (AR) for surgery residents learning aneurysm surgery.</p><p><strong>Materials and methods: </strong>Eight residents were included, and divided into an AR group and a control group (4 in each group). Both groups were asked to locate an aneurysm with a blue circle on the same screenshot after their viewing of surgery videos from both AR and non-AR tests. Only the AR group was allowed to inspect and manipulate an AR holographic representation of the aneurysm in AR tests. The actual location of the aneurysm was defined by a yellow circle by an attending physician after each test. Localization deviation was determined by the distance between the blue and yellow circle.</p><p><strong>Results: </strong>Localization deviation was lower in the AR group than in the control group in the last 2 tests (AR Test 2: 2.7 ± 1.0 mm vs. 5.8 ± 4.1 mm, <i>p</i> = 0.01, non-AR Test 2: 2.1 ± 0.8 mm vs. 5.9 ± 5.8 mm, <i>p</i> < 0.001). The mean deviation was lower in non-AR Test 2 as compared to non-AR Test 1 in both groups (AR: <i>p</i> < 0.001, control: <i>p</i> = 0.391). The localization deviation of the AR group decreased from 8.1 ± 3.8 mm in Test 2 to 2.7 ± 1.0 mm in AR Test 2 (<i>p</i> < 0.001).</p><p><strong>Conclusion: </strong>AR technology provides an effective and interactive way for neurosurgery training, and shortens the learning curve for residents in aneurysm surgery.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2311940"},"PeriodicalIF":2.1,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139693639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-09-10 | DOI: 10.1080/24699322.2024.2357164
Zheng Han, Qi Dou
Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved through superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling of intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the existence of various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technical-oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery, and discuss the potential topics for future advancements.
{"title":"A review on organ deformation modeling approaches for reliable surgical navigation using augmented reality.","authors":"Zheng Han, Qi Dou","doi":"10.1080/24699322.2024.2357164","DOIUrl":"https://doi.org/10.1080/24699322.2024.2357164","url":null,"abstract":"<p><p>Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved through superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling of intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the existence of various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technical-oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery, and discuss the potential topics for future advancements.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2357164"},"PeriodicalIF":1.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142301863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-01 | Epub Date: 2024-09-21 | DOI: 10.1080/24699322.2024.2404695
Subin Lee, Hyeonwook Kim, Jaehyeon Byeon, Seongbo Shim, Hyun-Joo Lee, Jaesung Hong
A robotic system for manipulating a flexible endoscope in surgery can provide enhanced accuracy and usability compared to manual operation. However, previous studies require large-scale, complex hardware systems to implement the rotational and translational motions of the soft endoscope cable. The conventional control of the endoscope by actuating the endoscope handle also leads to undesired slack between the endoscope tip and the handle, which becomes more problematic with long endoscopes such as a colonoscope. This study proposes a compact quad-roller friction mechanism that enables rotational and translational motions triggered not from the endoscope handle but at the endoscope tip. Controlling two pairs of tilted rollers achieves both types of motion within a small space. The proposed system also introduces an unsynchronized motion strategy between the handle and tip parts to minimize the robot's motion near the patient by employing the slack positively as a control index. Experiments indicate that the proposed system achieves accurate rotational and translational motions, and the unsynchronized control method reduces the total translational motion by up to 88% compared to the previous method.
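The sketch below gives a generic kinematic model for a tilted-roller friction drive of the kind described above, under the common screw-drive assumption that each roller's surface speed splits into an axial and a circumferential component on the scope; the tilt angle, scope radius, and sign conventions are assumptions for illustration, not the paper's actual kinematics.

```python
import math

# Hedged kinematic sketch for a quad-roller (two-pair) tilted-roller friction drive.
# Assumption (not from the paper): a roller tilted by theta to the scope axis
# contributes v*cos(theta) along the axis and v*sin(theta) around the circumference,
# and the two pairs are tilted at +theta and -theta.

def scope_motion(v_pair_a, v_pair_b, tilt_deg, scope_radius_mm):
    th = math.radians(tilt_deg)
    # axial feed: axial components of both pairs average together
    feed = (v_pair_a * math.cos(th) + v_pair_b * math.cos(th)) / 2.0        # mm/s
    # roll: opposite tilt signs make the circumferential components subtract
    circumferential = (v_pair_a * math.sin(th) - v_pair_b * math.sin(th)) / 2.0
    roll = circumferential / scope_radius_mm                                 # rad/s
    return feed, roll

# Equal roller speeds -> pure translation; opposite speeds -> pure rotation.
print(scope_motion(10.0, 10.0, tilt_deg=30, scope_radius_mm=6.0))
print(scope_motion(10.0, -10.0, tilt_deg=30, scope_radius_mm=6.0))
```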
{"title":"Flexible endoscope manipulating robot using quad-roller friction mechanism.","authors":"Subin Lee, Hyeonwook Kim, Jaehyeon Byeon, Seongbo Shim, Hyun-Joo Lee, Jaesung Hong","doi":"10.1080/24699322.2024.2404695","DOIUrl":"https://doi.org/10.1080/24699322.2024.2404695","url":null,"abstract":"<p><p>A robotic system for manipulating a flexible endoscope in surgery can provide enhanced accuracy and usability compared to manual operation. However, previous studies require large-scale, complex hardware systems to implement the rotational and translational motions of the soft endoscope cable. The conventional control of the endoscope by actuating the endoscope handle also leads to undesired slack between the endoscope tip and the handle, which becomes more problematic with long endoscopes such as a colonoscope. This study proposes a compact quad-roller friction mechanism that enables rotational and translational motions triggered not from the endoscope handle but at the endoscope tip. Controlling two pairs of tilted rollers achieves both types of motion within a small space. The proposed system also introduces an unsynchronized motion strategy between the handle and tip parts to minimize the robot's motion near the patient by employing the slack positively as a control index. Experiments indicate that the proposed system achieves accurate rotational and translational motions, and the unsynchronized control method reduces the total translational motion by up to 88% compared to the previous method.</p>","PeriodicalId":56051,"journal":{"name":"Computer Assisted Surgery","volume":"29 1","pages":"2404695"},"PeriodicalIF":1.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142301864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}