
Latest articles from the International Journal of Computer Assisted Radiology and Surgery

3D mobile regression vision transformer for collateral imaging in acute ischemic stroke.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-07-13 · DOI: 10.1007/s11548-024-03229-5
Sumin Jung, Hyun Yang, Hyun Jeong Kim, Hong Gee Roh, Jin Tae Kwak

Purpose: The accurate and timely assessment of the collateral perfusion status is crucial in the diagnosis and treatment of patients with acute ischemic stroke. Previous works have shown that collateral imaging, derived from CT angiography, MR perfusion, and MR angiography, aids in evaluating the collateral status. However, such methods are time-consuming and/or sub-optimal due to the nature of manual processing and heuristics. Recently, deep learning approaches have shown promise for generating collateral imaging. These, however, suffer from high computational complexity and cost.

Methods: In this study, we propose a mobile, lightweight deep regression neural network for collateral imaging in acute ischemic stroke, leveraging dynamic susceptibility contrast MR perfusion (DSC-MRP). Built upon lightweight convolution and Transformer architectures, the proposed model balances model complexity and performance.
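The complexity savings behind such lightweight convolution blocks can be illustrated with a quick parameter count comparing a standard 3D convolution against its depthwise-separable factorization. The layer sizes below are illustrative, not the authors' actual configuration:

```python
# Parameter counts for a standard 3D convolution vs. a depthwise-separable
# one -- the kind of lightweight building block mobile architectures rely on.
# Channel counts and kernel size are illustrative assumptions.

def conv3d_params(c_in, c_out, k):
    # standard 3D conv: one k*k*k kernel per (input, output) channel pair
    return c_in * c_out * k ** 3

def separable_conv3d_params(c_in, c_out, k):
    # depthwise k*k*k conv per input channel + 1x1x1 pointwise conv
    return c_in * k ** 3 + c_in * c_out

dense = conv3d_params(64, 64, 3)
light = separable_conv3d_params(64, 64, 3)
print(dense, light, round(dense / light, 1))  # 110592 5824 19.0
```

For this toy layer the separable variant needs roughly 19x fewer parameters, which is the kind of trade-off that makes a "mobile" 3D model feasible.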

Results: We evaluated the performance of the proposed model in generating the five-phase collateral maps, including the arterial, capillary, early venous, late venous, and delayed phases, using DSC-MRP from 952 patients. In comparison with various deep learning models, the proposed method outperformed competitors of similar complexity and was comparable to competitors of high complexity.

Conclusion: The results suggest that the proposed model can facilitate rapid and precise assessment of the collateral status of patients with acute ischemic stroke, leading to improved patient care and outcomes.

Citations: 0
Detection of pulmonary nodules in chest radiographs: novel cost function for effective network training with purely synthesized datasets.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-07-13 · DOI: 10.1007/s11548-024-03227-7
Shouhei Hanaoka, Yukihiro Nomura, Takeharu Yoshikawa, Takahiro Nakao, Tomomi Takenaga, Hirotaka Matsuzaki, Nobutake Yamamichi, Osamu Abe

Purpose: Many large radiographic datasets of lung nodules are available, but small and hard-to-detect nodules are rarely validated by computed tomography. Such difficult nodules are crucial for training nodule detection methods. This scarcity of difficult training nodules can be addressed by artificial nodule synthesis algorithms, which create artificially embedded nodules. This study aimed to develop and evaluate a novel cost function for training networks to detect such lesions. Embedding artificial lesions in healthy medical images is effective when positive cases are insufficient for network training. Although this approach provides both positive (lesion-embedded) images and the corresponding negative (lesion-free) images, no known methods effectively use these pairs for training. This paper presents a novel cost function for segmentation-based detection networks when positive-negative pairs are available.

Methods: Based on the classic U-Net, new terms were added to the original Dice loss to reduce false positives and to enable contrastive learning of diseased regions in the image pairs. The experimental network was trained and evaluated, respectively, on 131,072 fully synthesized pairs of images simulating lung cancer and on real chest X-ray images from the Japanese Society of Radiological Technology dataset.
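As a rough sketch of how an extra term can be attached to a Dice loss, the snippet below combines Dice with a false-positive penalty. The exact formulation and weighting of the paper's additional terms (including the contrastive pair term) are not reproduced; the `lam` weight and toy arrays are illustrative assumptions:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-7):
    # soft Dice loss on probability maps
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def fp_penalty(pred, target):
    # mean predicted probability over truly negative pixels
    neg = 1.0 - target
    return (pred * neg).sum() / max(neg.sum(), 1.0)

def combined_loss(pred, target, lam=0.5):
    # Dice term plus a weighted false-positive term (illustrative weights)
    return dice_loss(pred, target) + lam * fp_penalty(pred, target)

pred = np.array([[0.9, 0.2], [0.1, 0.8]])
target = np.array([[1.0, 0.0], [0.0, 1.0]])
print(round(combined_loss(pred, target), 4))  # 0.225
```

A perfect prediction drives both terms toward zero, while confident predictions on lesion-free pixels are penalized directly rather than only through the Dice denominator.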

Results: The proposed method outperformed RetinaNet and a single-shot multibox detector. At 0.2 false positives per image, the sensitivities were 0.688 and 0.507 with and without fine-tuning, respectively, under the leave-one-case-out setting.

Conclusion: To our knowledge, this is the first study in which a method for detecting pulmonary nodules in chest X-ray images was evaluated on a real clinical dataset after being trained on fully synthesized images. The synthesized dataset is available at https://zenodo.org/records/10648433 .

Citations: 0
A microdiscectomy surgical video annotation framework for supervised machine learning applications.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-07-19 · DOI: 10.1007/s11548-024-03203-1
Kochai Jan Jawed, Ian Buchanan, Kevin Cleary, Elizabeth Fischer, Aaron Mun, Nishanth Gowda, Arhum Naeem, Recai Yilmaz, Daniel A Donoho

Purpose: Lumbar discectomy is among the most common spine procedures in the US, with 300,000 procedures performed each year. Like other surgical procedures, it is not free of potential complications. This paper presents a video annotation methodology for microdiscectomy, including the development of a surgical workflow. In future work, this methodology could be combined with computer vision and machine learning models to predict potential adverse events. Such systems would monitor intraoperative activities and possibly anticipate outcomes.

Methods: A necessary step in supervised machine learning methods is video annotation, which involves labeling objects frame-by-frame to make them recognizable for machine learning applications. Microdiscectomy video recordings of spine surgeries were collected from a multi-center research collaborative. These videos were anonymized and stored in a cloud-based platform. Videos were uploaded to an online annotation platform. An annotation framework was developed based on literature review and surgical observations to ensure proper understanding of the instruments, anatomy, and steps.

Results: An annotated video of microdiscectomy was produced by a single surgeon. Multiple iterations allowed for the creation of an annotated video complete with labeled surgical tools, anatomy, and phases. In addition, a workflow was developed for training novice annotators, which provides information about the annotation software to assist in the production of standardized annotations.

Conclusions: A standardized workflow for managing surgical video data is essential for surgical video annotation and machine learning applications. We developed a standard workflow for annotating surgical videos for microdiscectomy that may facilitate the quantitative analysis of videos using supervised machine learning applications. Future work will demonstrate the clinical relevance and impact of this workflow by developing process modeling and outcome predictors.

Citations: 0
Real-time prediction of postoperative spinal shape with machine learning models trained on finite element biomechanical simulations.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-07-23 · DOI: 10.1007/s11548-024-03237-5
Renzo Phellan Aro, Bahe Hachem, Julien Clin, Jean-Marc Mac-Thiong, Luc Duong

Purpose: Adolescent idiopathic scoliosis (AIS) is a chronic disease that may require correction surgery. The finite element method (FEM) is a popular option for planning the outcome of surgery on a patient-based model. However, it requires considerable computing power and time, which may discourage its use. Machine learning (ML) models can serve as a helpful surrogate for the FEM, providing accurate real-time responses. This work implements ML algorithms to estimate post-operative spinal shapes.

Methods: The algorithms are trained using features from 6400 simulations generated using the FEM from spine geometries of 64 patients. The features are selected using an autoencoder and principal component analysis. The accuracy of the results is evaluated by calculating the root-mean-squared error and the angle between the reference and predicted position of each vertebra. The processing times are also reported.
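The PCA-plus-regression surrogate idea can be sketched on synthetic data: compress the simulation features with principal components, then fit a linear map to (here synthetic) post-operative coordinates. Array sizes, the number of components, and the targets are illustrative assumptions, not the study's actual data:

```python
import numpy as np

# Synthetic stand-in for FEM-simulation features and post-op coordinates.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))                # 100 simulations, 20 raw features
W = rng.normal(size=(20, 6))
Y = X @ W + 0.01 * rng.normal(size=(100, 6))  # toy "post-operative" targets

# PCA via SVD on centered features; keep 5 principal components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:5].T

# Linear regression in the reduced space (ordinary least squares).
beta, *_ = np.linalg.lstsq(Z, Y - Y.mean(axis=0), rcond=None)
pred = Z @ beta + Y.mean(axis=0)

rmse = np.sqrt(np.mean((pred - Y) ** 2))
baseline = np.sqrt(np.mean((Y - Y.mean(axis=0)) ** 2))
print(rmse < baseline)  # the surrogate beats the mean-prediction baseline
```

Once `Vt` and `beta` are stored, a new prediction is two small matrix products, which is what makes millisecond-scale response times plausible compared with a full FEM solve.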

Results: A combination of principal component analysis for dimensionality reduction followed by a linear regression model generated accurate results in real time, with an average position error of 3.75 mm and an orientation angle error below 2.74 degrees in all main 3D axes, within 3 ms. The prediction time is considerably faster than simulations based on the FEM alone, which require seconds to minutes.

Conclusion: It is possible to predict the post-operative spinal shapes of patients with AIS in real time by using ML algorithms as a surrogate for the FEM.

Citations: 0
A novel contact optimization algorithm for endomicroscopic surface scanning.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-07-06 · DOI: 10.1007/s11548-024-03223-x
Xingfeng Xu, Shengzhe Zhao, Lun Gong, Siyang Zuo

Purpose: Probe-based confocal laser endomicroscopy (pCLE) offers real-time, cell-level imaging and holds promise for early cancer diagnosis. However, large-area surface scanning is needed for image acquisition to overcome the limited field of view. Obtaining high-quality images during scanning requires maintaining a stable contact distance between the tissue and the probe. This work presents a novel contact optimization algorithm to acquire high-quality pCLE images.

Methods: The contact optimization algorithm, based on the swarm intelligence of the whale optimization algorithm, is designed to optimize the probe position according to the quality of the image acquired by the probe. Total co-occurrence entropy, an accurate image-quality measure, is introduced to evaluate pCLE image quality. The algorithm aims to maintain consistent probe-tissue contact, resulting in high-quality image acquisition.
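As a toy illustration of this family of swarm optimizers, the sketch below runs a simplified one-dimensional whale optimization loop that searches for the probe distance maximizing a mock quality score. The Gaussian quality model, the assumed optimum of 1.5, and all coefficients are illustrative stand-ins, not the paper's total co-occurrence entropy or tuning:

```python
import math
import random

def quality(d):
    # hypothetical image-quality score peaking at a contact distance of 1.5
    return math.exp(-(d - 1.5) ** 2)

def woa_1d(n_whales=10, n_iters=50, lo=0.0, hi=3.0, seed=1):
    # simplified 1-D whale optimization: encircling + spiral moves, elitist best
    rng = random.Random(seed)
    whales = [rng.uniform(lo, hi) for _ in range(n_whales)]
    best = max(whales, key=quality)
    for t in range(n_iters):
        a = 2.0 * (1 - t / n_iters)               # linearly decreasing coefficient
        for i, w in enumerate(whales):
            A = 2 * a * rng.random() - a
            C = 2 * rng.random()
            if rng.random() < 0.5:                # encircling / exploring move
                whales[i] = best - A * abs(C * best - w)
            else:                                 # log-spiral move toward the best
                l = rng.uniform(-1, 1)
                whales[i] = abs(best - w) * math.exp(l) * math.cos(2 * math.pi * l) + best
            whales[i] = min(max(whales[i], lo), hi)
        best = max(whales + [best], key=quality)  # keep the best position found
    return best

print(woa_1d())  # should land near the assumed optimum of 1.5
```

Because the whales contract toward the incumbent best as `a` decays, the search turns from global exploration into local refinement of the contact distance.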

Results: Scanning experiments on sponge, ex vivo swine skin tissue and stomach tissue demonstrate the effectiveness of the contact optimization algorithm. Scanning results of the sponge with three different trajectories (spiral trajectory, circle trajectory, and raster trajectory) reveal high-quality mosaics with clear details in every part of the image and no blurred sections.

Conclusion: The contact optimization algorithm successfully identifies the optimal distance between probe and tissue, improving the quality of pCLE images. Experimental results confirm the high potential of this method in endomicroscopic surface scanning.

Citations: 0
PolypNextLSTM: a lightweight and fast polyp video segmentation network using ConvNext and ConvLSTM.
IF 2.3 · Medicine (Tier 3) · Q3 ENGINEERING, BIOMEDICAL · Pub Date: 2024-10-01 · Epub Date: 2024-08-08 · DOI: 10.1007/s11548-024-03244-6
Debayan Bhattacharya, Konrad Reuter, Finn Behrendt, Lennart Maack, Sarah Grube, Alexander Schlaefer

Purpose: Commonly employed in polyp segmentation, single-image UNet architectures lack the temporal insight clinicians gain from video data when diagnosing polyps. To mirror clinical practice more faithfully, our proposed solution, PolypNextLSTM, leverages video-based deep learning, harnessing temporal information for superior segmentation performance with minimal parameter overhead, making it potentially suitable for edge devices.

Methods: PolypNextLSTM employs a UNet-like structure with ConvNext-Tiny as its backbone, strategically omitting the last two layers to reduce parameter overhead. Our temporal fusion module, a Convolutional Long Short Term Memory (ConvLSTM), effectively exploits temporal features. Our primary novelty lies in PolypNextLSTM, which stands out as the leanest in parameters and the fastest model, surpassing the performance of five state-of-the-art image- and video-based deep learning models. The evaluation on the SUN-SEG dataset spans easy-to-detect and hard-to-detect polyp scenarios, along with videos containing challenging artefacts like fast motion and occlusion.
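The core of ConvLSTM-style temporal fusion is replacing the matrix products of an LSTM cell with convolutions, so the spatial layout of the frame survives across time steps. Below is a single-channel, numpy-only sketch of one ConvLSTM step; kernel sizes, channel count, and the toy five-frame clip are illustrative, not the PolypNextLSTM implementation:

```python
import numpy as np

def conv2d(x, k):
    # 'same' single-channel 2D convolution (DL convention, i.e. cross-correlation)
    p = k.shape[0] // 2
    xp = np.pad(x, p)
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, kx, kh):
    # gates computed with convolutions instead of dense matrix products
    i = sigmoid(conv2d(x, kx['i']) + conv2d(h, kh['i']))
    f = sigmoid(conv2d(x, kx['f']) + conv2d(h, kh['f']))
    o = sigmoid(conv2d(x, kx['o']) + conv2d(h, kh['o']))
    g = np.tanh(conv2d(x, kx['g']) + conv2d(h, kh['g']))
    c = f * c + i * g          # cell state mixes old memory with new input
    h = o * np.tanh(c)         # hidden state keeps the spatial grid shape
    return h, c

rng = np.random.default_rng(0)
kx = {k: rng.normal(scale=0.1, size=(3, 3)) for k in 'ifog'}
kh = {k: rng.normal(scale=0.1, size=(3, 3)) for k in 'ifog'}
h = c = np.zeros((8, 8))
for frame in rng.normal(size=(5, 8, 8)):   # toy 5-frame clip
    h, c = convlstm_step(frame, h, c, kx, kh)
print(h.shape)  # (8, 8)
```

The hidden state `h` stays an 8x8 map after every frame, which is what lets a decoder consume temporally fused yet spatially aligned features.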

Results: Comparison against 5 image-based and 5 video-based models demonstrates PolypNextLSTM's superiority, achieving a Dice score of 0.7898 on the hard-to-detect polyp test set, surpassing image-based PraNet (0.7519) and video-based PNS+ (0.7486). Notably, our model excels in videos featuring complex artefacts such as ghosting and occlusion.

Conclusion: PolypNextLSTM, integrating a pruned ConvNext-Tiny with ConvLSTM for temporal fusion, not only exhibits superior segmentation performance but also achieves the highest frames per second among the evaluated models. Code can be found here: https://github.com/mtec-tuhh/PolypNextLSTM .

Citations: 0
Domain adaptation using AdaBN and AdaIN for high-resolution IVD mesh reconstruction from clinical MRI.
IF 2.3 3区 医学 Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-01 Epub Date: 2024-07-13 DOI: 10.1007/s11548-024-03233-9
Sai Natarajan, Ludovic Humbert, Miguel A González Ballester

Purpose: Deep learning has firmly established its dominance in medical imaging applications. However, careful consideration is required when adapting a trained source model to an entirely distinct environment that deviates significantly from the training set. Most efforts to mitigate this issue have focused on classification and segmentation tasks. In this work, we perform domain adaptation of a trained source model to reconstruct high-resolution intervertebral disc meshes from low-resolution MRI.

Methods: To address the outlined challenges, we use MRI2Mesh as the shape reconstruction network. It incorporates three major modules: image encoder, mesh deformation, and cross-level feature fusion. This feature fusion module is used to encapsulate local and global disc features. We evaluate two major domain adaptation techniques: adaptive batch normalization (AdaBN) and adaptive instance normalization (AdaIN) for the task of shape reconstruction.

Results: Experiments conducted on distinct datasets, including data from different populations, machines, and test sites demonstrate the effectiveness of MRI2Mesh for domain adaptation. MRI2Mesh achieved up to a 14% decrease in Hausdorff distance (HD) and a 19% decrease in the point-to-surface (P2S) metric for both AdaBN and AdaIN experiments, indicating improved performance.

Conclusion: MRI2Mesh has demonstrated consistent superiority to the state-of-the-art Voxel2Mesh network across a diverse range of datasets, populations, and scanning protocols, highlighting its versatility. Additionally, AdaBN has emerged as the more robust of the two methods compared to AdaIN. Further experiments show that MRI2Mesh, when combined with AdaBN, holds immense promise for enhancing the precision of anatomical shape reconstruction in domain adaptation.
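To illustrate the statistic swap at the heart of AdaIN (AdaBN is analogous, but replaces batch-normalization running statistics with target-domain statistics), here is a minimal NumPy sketch; the array shapes and names are illustrative and not taken from the MRI2Mesh code:

```python
import numpy as np

def adain(content: np.ndarray, style: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Adaptive instance normalization: re-scale each channel of `content`
    (shape C x H x W) to the per-channel mean/std of `style`."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu

rng = np.random.default_rng(0)
content = rng.normal(5.0, 2.0, size=(3, 8, 8))   # source-domain features
style = rng.normal(0.0, 1.0, size=(3, 8, 8))     # target-domain features
out = adain(content, style)
```

After the swap, `out` carries the spatial structure of `content` but the per-channel first- and second-order statistics of `style`, which is the mechanism both adaptation techniques exploit.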

Pages: 2063-2068.
Citations: 0
Towards multimodal graph neural networks for surgical instrument anticipation.
IF 2.3 3区 医学 Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-01 Epub Date: 2024-07-10 DOI: 10.1007/s11548-024-03226-8
Lars Wagner, Dennis N Schneider, Leon Mayer, Alissa Jell, Carolin Müller, Alexander Lenz, Alois Knoll, Dirk Wilhelm

Purpose: Decision support systems and context-aware assistance in the operating room have emerged as the key clinical applications supporting surgeons in their daily work and are generally based on single modalities. The model- and knowledge-based integration of multimodal data as a basis for decision support systems that can dynamically adapt to the surgical workflow has not yet been established. Therefore, we propose a knowledge-enhanced method for fusing multimodal data for anticipation tasks.

Methods: We developed a holistic, multimodal graph-based approach combining imaging and non-imaging information in a knowledge graph representing the intraoperative scene of a surgery. Node and edge features of the knowledge graph are extracted from suitable data sources in the operating room using machine learning. A spatiotemporal graph neural network architecture subsequently allows for interpretation of relational and temporal patterns within the knowledge graph. We apply our approach to the downstream task of instrument anticipation while presenting a suitable modeling and evaluation strategy for this task.

Results: Our approach achieves an F1 score of 66.86% for instrument anticipation, supporting a seamless surgical workflow and adding value to surgical decision support systems. A resting recall of 63.33% indicates that the anticipations are not premature.

Conclusion: This work shows how multimodal data can be combined with the topological properties of an operating room in a graph-based approach. Our multimodal graph architecture serves as a basis for context-sensitive decision support systems in laparoscopic surgery considering a comprehensive intraoperative operating scene.
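To illustrate the general idea of propagating features over a graph of intraoperative entities (not the paper's spatiotemporal architecture), a single mean-aggregation message-passing step can be sketched in NumPy; the nodes, edges, and weights below are toy placeholders:

```python
import numpy as np

# Toy intraoperative scene graph: 3 nodes (e.g. surgeon, instrument,
# anatomy) with 2-D features, connected by directed edges.
node_feats = np.array([[1.0, 0.0],
                       [0.0, 1.0],
                       [1.0, 1.0]])
edges = [(0, 1), (1, 2), (2, 0)]  # (source, target)

# Adjacency with self-loops, row-normalized (mean aggregation).
n = node_feats.shape[0]
adj = np.eye(n)
for s, t in edges:
    adj[t, s] = 1.0
adj /= adj.sum(axis=1, keepdims=True)

W = np.array([[0.5, 0.0],
              [0.0, 0.5]])  # toy learnable weight matrix

# One GCN-style layer: aggregate neighbors, project, apply ReLU.
updated = np.maximum(adj @ node_feats @ W, 0.0)
```

Stacking such layers, and repeating them over the timeframes of a procedure, is the basic recipe behind spatiotemporal graph neural networks of the kind described above.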

Pages: 1929-1937. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11442600/pdf/
Citations: 0
Applying artificial intelligence on EDA sensor data to predict stress on minimally invasive robotic-assisted surgery.
IF 2.3 3区 医学 Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-01 Epub Date: 2024-07-02 DOI: 10.1007/s11548-024-03218-8
Daniel Caballero, Manuel J Pérez-Salazar, Juan A Sánchez-Margallo, Francisco M Sánchez-Margallo

Purpose: This study aims to predict the surgeon's stress level from ergonomic (kinematic) and physiological (electrodermal activity-EDA, blood pressure, and body temperature) parameters recorded in the immediately preceding period of a minimally invasive robotic surgery activity.

Methods: For this purpose, data related to the surgeon's ergonomic and physiological parameters were collected during twenty-six robotic-assisted surgical sessions completed by eleven surgeons with different experience levels. Once the dataset was generated, two preprocessing techniques were applied (scaling and normalization), and each resulting dataset was divided into two subsets: 80% of the data for training and cross-validation, and 20% for testing. Three predictive techniques (multiple linear regression-MLR, support vector machine-SVM, and multilayer perceptron-MLP) were applied to the training dataset to generate predictive models. Finally, these models were validated on the cross-validation and test datasets. After each session, surgeons were asked to complete a survey of their perceived stress. These data were compared with those obtained using the predictive models.

Results: The results showed that MLR combined with the scaled preprocessing achieved the highest R2 coefficient and the lowest error for each parameter analyzed. Additionally, the results for the surgeons' surveys were highly correlated to the results obtained by the predictive models (R2 = 0.8253).

Conclusions: The linear models proposed in this study were successfully validated on cross-validation and test datasets. This fact demonstrates the possibility of predicting factors that help us to improve the surgeon's health during robotic surgery.
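The 80/20 split, scaling, and MLR pipeline described above can be sketched with synthetic data; the features, coefficients, and noise level below are placeholders, not the study's recordings:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the surgeon's parameters (e.g. EDA, blood
# pressure, body temperature) and a stress score with a linear dependence.
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=100)

# 80/20 train/test split.
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Min-max scaling fitted on the training set only.
x_min, x_max = X_train.min(axis=0), X_train.max(axis=0)
X_train_s = (X_train - x_min) / (x_max - x_min)
X_test_s = (X_test - x_min) / (x_max - x_min)

# Multiple linear regression via least squares (bias column appended).
A = np.hstack([X_train_s, np.ones((len(X_train_s), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

# R^2 on the held-out 20%.
pred = np.hstack([X_test_s, np.ones((len(X_test_s), 1))]) @ coef
ss_res = ((y_test - pred) ** 2).sum()
ss_tot = ((y_test - y_test.mean()) ** 2).sum()
r2 = 1.0 - ss_res / ss_tot
```

Fitting the scaler on the training split only, as above, avoids leaking test statistics into the model — the same discipline the study's preprocessing implies.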

Pages: 1953-1963.
Citations: 0
Deep learning-based segmentation of left ventricular myocardium on dynamic contrast-enhanced MRI: a comprehensive evaluation across temporal frames.
IF 2.3 3区 医学 Q3 ENGINEERING, BIOMEDICAL Pub Date : 2024-10-01 Epub Date: 2024-07-04 DOI: 10.1007/s11548-024-03221-z
Raufiya Jafari, Radhakrishan Verma, Vinayak Aggarwal, Rakesh Kumar Gupta, Anup Singh

Purpose: Cardiac perfusion MRI is vital for disease diagnosis, treatment planning, and risk stratification, with anomalies serving as markers of underlying ischemic pathologies. AI-assisted methods and tools enable accurate and efficient left ventricular (LV) myocardium segmentation on all DCE-MRI timeframes, offering a solution to the challenges posed by the multidimensional nature of the data. This study aims to develop and assess an automated method for LV myocardial segmentation on DCE-MRI data of a local hospital.

Methods: The study consists of retrospective DCE-MRI data from 55 subjects acquired at the local hospital using a 1.5 T MRI scanner. The dataset included subjects with and without cardiac abnormalities. The timepoint for the reference frame (post-contrast LV myocardium) was identified using standard deviation across the temporal sequences. Iterative image registration of other temporal images with respect to this reference image was performed using Maxwell's demons algorithm. The registered stack was fed to the model built using the U-Net framework for predicting the LV myocardium at all timeframes of DCE-MRI.

Results: The mean and standard deviation of the Dice similarity coefficient (DSC) for myocardial segmentation using the pre-trained network Net_cine is 0.78 ± 0.04, and for the fine-tuned network Net_dyn, which predicts the mask on each timeframe individually, it is 0.78 ± 0.03. The DSC for Net_dyn ranged from 0.71 to 0.93. The average DSC achieved for the reference frame is 0.82 ± 0.06.

Conclusion: The study proposed a fast and fully automated AI-assisted method to segment LV myocardium on all timeframes of DCE-MRI data. The method is robust, and its performance is independent of the intra-temporal sequence registration and can easily accommodate timeframes with potential registration errors.
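One plausible reading of the reference-frame selection step — picking the timeframe with the largest intensity spread — can be sketched on a synthetic stack; the criterion and data here are illustrative, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic DCE-MRI stack: T timeframes of a 16x16 slice. Frame 12 mimics
# the post-contrast frame with the strongest signal variation.
T, H, W = 30, 16, 16
stack = rng.normal(loc=100.0, scale=1.0, size=(T, H, W))
stack[12] += rng.normal(loc=0.0, scale=25.0, size=(H, W))  # contrast uptake

# Pick the frame whose spatial intensity spread is largest, a simple proxy
# for the "standard deviation across the temporal sequences" criterion.
per_frame_std = stack.std(axis=(1, 2))
reference_idx = int(per_frame_std.argmax())
```

All other timeframes would then be registered to `stack[reference_idx]` (the study uses the demons algorithm for this) before segmentation.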

Pages: 2055-2062.
Citations: 0