
Latest publications: International Journal of Computer Assisted Radiology and Surgery

Computer-aided design and fabrication of nasal prostheses: a semi-automated algorithm using statistical shape modeling.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-06-06 | DOI: 10.1007/s11548-024-03206-y
T Bannink, M de Ridder, S Bouman, M J A van Alphen, R L P van Veen, M W M van den Brekel, M B Karakullukçu

Purpose: This research aimed to develop an innovative method for designing and fabricating nasal prostheses that reduces dependence on anaplastologist expertise while maintaining quality and appearance, allowing patients to regain their normal facial appearance.

Methods: The method involved statistical shape modeling using a morphable face model and 3D data acquired through optical scanning or CT. An automated design process generated patient-specific fits and appearances using regular prosthesis materials and 3D printing of molds. Manual input was required for specific case-related details.
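As a concrete illustration of the statistical-shape-modeling step, the sketch below fits the shape coefficients of a PCA-based morphable face model to the visible (intact) vertices of a patient scan so that the model completes the missing nasal region. This is a minimal reading of the abstract, not the authors' released code; `mean_shape`, `components`, and `visible_idx` are hypothetical names.

```python
# Hypothetical sketch: regularized least-squares fit of morphable-model
# shape coefficients to the intact part of a 3D scan (names are assumptions).
import numpy as np

def fit_morphable_model(scan_coords, mean_shape, components, visible_idx, reg=1e-3):
    """scan_coords: flattened xyz of the scanned (visible) vertices, shape (k,);
    mean_shape: (3V,) mean face; components: (3V, m) PCA modes;
    visible_idx: indices into the 3V coordinates matched by the scan."""
    A = components[visible_idx]                    # modes restricted to visible coords
    r = scan_coords - mean_shape[visible_idx]      # residual the modes must explain
    b = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ r)
    return mean_shape + components @ b             # full face with the nose completed
```

The completed nasal surface could then be intersected with the defect region to derive the prosthesis geometry and the mold to be 3D printed.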

Results: The developed method met all predefined requirements, replacing analog impression-making and offering compatibility with various data acquisition methods. Prostheses created through this method exhibited equivalent aesthetics to conventionally fabricated ones while reducing the skill dependency typically associated with prosthetic design and fabrication.

Conclusions: This method provides a promising approach for both temporary and definitive nasal prostheses, with the potential for remote prosthesis fabrication in areas lacking anaplastology care. While new skills are required for data acquisition and algorithm control, these technologies are increasingly accessible. Further clinical studies will help validate its effectiveness, and ongoing technological advancements may lead to even more advanced and skill-independent prosthesis fabrication methods in the future.

Citations: 0
Preliminary study of substantia nigra analysis by tensorial feature extraction.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-06-27 | DOI: 10.1007/s11548-024-03175-2
Hayato Itoh, Masahiro Oda, Shinji Saiki, Koji Kamagata, Wataru Sako, Kei-Ichi Ishikawa, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori

Purpose: Parkinson disease (PD) is a common progressive neurodegenerative disorder in our ageing society. Early-stage PD biomarkers are desired for timely clinical intervention and understanding of pathophysiology. Since one of the characteristics of PD is the progressive loss of dopaminergic neurons in the substantia nigra pars compacta, we propose a feature extraction method for analysing the differences in the substantia nigra between PD and non-PD patients.

Method: We propose a feature-extraction method for volumetric images based on a rank-1 tensor decomposition. Furthermore, we apply a feature selection method that excludes features common to PD and non-PD. We collect neuromelanin images of 263 patients (124 PD and 139 non-PD) and divide them into training and testing datasets for the experiments. We then experimentally evaluate the accuracy of classifying PD versus non-PD patients from the substantia nigra using the proposed feature extraction method and linear discriminant analysis.
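For readers unfamiliar with rank-1 tensor decomposition, the sketch below shows the classic alternating-least-squares fit of a rank-1 CP (PARAFAC) model to a 3D volume; whether the paper uses exactly this fitting scheme is our assumption.

```python
# Rank-1 CP decomposition of a volume via alternating least squares:
# vol ≈ lam * a ⊗ b ⊗ c, with the factor vectors used as compact features.
import numpy as np

def rank1_features(vol, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    a, b, c = (rng.random(s) for s in vol.shape)
    for _ in range(n_iter):
        a = np.einsum('xyz,y,z->x', vol, b, c); a /= np.linalg.norm(a)
        b = np.einsum('xyz,x,z->y', vol, a, c); b /= np.linalg.norm(b)
        c = np.einsum('xyz,x,y->z', vol, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('xyz,x,y,z->', vol, a, b, c)   # magnitude of the rank-1 term
    return lam, a, b, c                            # e.g., inputs to LDA
```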

Results: The proposed method achieves a sensitivity of 0.72 and a specificity of 0.64 on our testing dataset of 66 non-PD and 42 PD patients. Furthermore, we visualise the important patterns in the substantia nigra by a linear combination of rank-1 tensors with selected features. The visualised patterns include the ventrolateral tier, where severe loss of neurons can be observed in PD.

Conclusions: We develop a new feature-extraction method for the analysis of the substantia nigra towards PD diagnosis. In the experiments, even though the classification accuracy with the proposed feature extraction method and linear discriminant analysis is lower than that of expert physicians, the results suggest the potential of tensorial feature extraction.

Citations: 0
Aortic roadmapping during EVAR: a combined FEM-EM tracking feasibility study.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-06-02 | DOI: 10.1007/s11548-024-03187-y
Monica Emendi, Geir A Tangen, Pierluigi Di Giovanni, Håvard Ulsaker, Reidar Brekken, Frode Manstad-Hulaas, Victorien Prot, Aline Bel-Brunon, Karen H Støverud

Purpose: Currently, the intra-operative visualization of vessels during endovascular aneurysm repair (EVAR) relies on contrast-based imaging modalities. Moreover, traditional image fusion techniques lack a continuous and automatic update of the vessel configuration, which changes due to the insertion of stiff guidewires. The purpose of this work is to develop and evaluate a novel approach to image fusion that takes these deformations into account by combining electromagnetic (EM) tracking technology and finite element modeling (FEM).

Methods: To assess whether EM tracking can improve the prediction of the numerical simulations, a patient-specific model of abdominal aorta was segmented and manufactured. A database of simulations with different insertion angles was created. Then, an ad hoc sensorized tool with three embedded EM sensors was designed, enabling tracking of the sensors' positions during the insertion phase. Finally, the corresponding cone beam computed tomography (CBCT) images were acquired and processed to obtain the ground truth aortic deformations of the manufactured model.
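The database-matching idea can be summarized in a few lines: given live EM measurements of the three sensors, select the precomputed FEM simulation whose predicted sensor positions agree best. The array layouts below are illustrative assumptions.

```python
# Pick the FEM simulation (one per insertion angle) whose predicted sensor
# positions are closest, on average, to the measured EM positions.
import numpy as np

def best_simulation(em_pos, sim_pos):
    """em_pos: (3, 3) measured xyz of the 3 sensors;
    sim_pos: (n_sims, 3, 3) sensor positions predicted by each simulation."""
    err = np.linalg.norm(sim_pos - em_pos, axis=2).mean(axis=1)  # mean distance per sim
    return int(np.argmin(err))  # this simulation's aortic deformation is the roadmap
```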

Results: Among the simulations in the database, the one that minimized the in silico versus in vitro discrepancy in the sensors' positions gave the most accurate aortic displacement results.

Conclusions: The proposed approach suggests that EM tracking technology could be used not only to follow the tool but also to minimize the error in the predicted aortic roadmap, thus paving the way for safer EVAR navigation.

Citations: 0
An analysis on the effect of body tissues and surgical tools on workflow recognition in first person surgical videos.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-02-27 | DOI: 10.1007/s11548-024-03074-6
Hisako Tomita, Naoto Ienaga, Hiroki Kajita, Tetsu Hayashida, Maki Sugimoto

Purpose: Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study aims to propose an automatic recognition system for the procedural workflow by employing machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools.

Methods: We conducted annotations on approximately 908 first-person-view images of breast surgery to facilitate segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. The network, structured on the Transformer architecture, was then trained with surgical images incorporating masks for body tissues and surgical tools.
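A rough sketch of how the segmentation output could be attached to each frame before workflow recognition is shown below; it uses torchvision's generic Mask R-CNN as a stand-in for the paper's tissue/tool model, so the pretrained weights and the single extra mask channel are assumptions.

```python
# Append a mask channel, produced by Mask R-CNN, to each video frame before
# feeding the Transformer-based workflow-recognition network.
import torch, torchvision

seg = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

def frame_with_masks(frame):                         # frame: (3, H, W), values in [0, 1]
    with torch.no_grad():
        out = seg([frame])[0]                        # dict with boxes/labels/scores/masks
    masks = out["masks"][out["scores"] > 0.5].squeeze(1)   # (N, H, W) soft masks
    mask_ch = (masks.sum(0, keepdim=True).clamp(0, 1)
               if len(masks) else torch.zeros(1, *frame.shape[1:]))
    return torch.cat([frame, mask_ch], dim=0)        # (4, H, W) Transformer input
```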

Results: The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue. Simultaneously, the spatial features of the surgical tools were effectively captured. Regarding the accuracy of procedural workflow recognition, accounting for body tissues led to an average improvement of 3% over the baseline. Furthermore, the inclusion of surgical tools yielded an additional 4% increase in accuracy over the baseline.

Conclusion: In this study, we revealed the contributions of the temporal transitions of body tissues and of the spatial features of surgical tools to recognizing procedural workflow in first-person-view surgical videos. Body tissues, especially in open surgery, can be a crucial element. This study suggests that further improvements can be achieved by accurately identifying the surgical tools specific to each workflow step.

Citations: 0
Background removal for debiasing computer-aided cytological diagnosis.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-06-25 | DOI: 10.1007/s11548-024-03169-0
Keita Takeda, Tomoya Sakai, Eiji Mitate

To address the background-bias problem in computer-aided cytology caused by microscopic slide deterioration, this article proposes a deep learning approach for cell segmentation and background removal that requires no cell annotation. A U-Net-based model was trained to separate cells from the background in an unsupervised manner by leveraging the redundancy of the background and the sparsity of cells in liquid-based cytology (LBC) images. The experimental results demonstrate that the U-Net-based model trained on a small set of cytology images can exclude background features and accurately segment cells. This capability is beneficial for debiasing the detection and classification of the cells of interest in oral LBC. Slide deterioration can significantly affect deep learning-based cell classification. Our proposed method effectively removes background features at no annotation cost, thereby enabling accurate cytological diagnosis through deep learning of microscopic slide images.
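The abstract does not give the training objective, but one plausible annotation-free formulation consistent with it is sketched below: a U-Net predicts a soft cell mask, pixels outside the mask must be explained by a redundant background estimate, and an L1 term keeps the mask sparse. This is our assumption, not the authors' loss.

```python
# Hypothetical unsupervised separation loss: background redundancy explains
# the unmasked pixels; sparsity pressure keeps the predicted cell mask small.
import torch
import torch.nn.functional as F

def separation_loss(img, mask, background, sparsity=0.1):
    """img: (B, 3, H, W); mask: U-Net sigmoid output (B, 1, H, W);
    background: redundant background estimate, e.g. a batch median image."""
    recon = F.l1_loss((1 - mask) * img, (1 - mask) * background)  # non-cell pixels fit background
    return recon + sparsity * mask.mean()                          # cells cover few pixels
```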

Citations: 0
Hybrid representation-enhanced sampling for Bayesian active learning in musculoskeletal segmentation of lower extremities.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-01-29 | DOI: 10.1007/s11548-024-03065-7
Ganping Li, Yoshito Otake, Mazen Soufi, Masashi Taniguchi, Masahide Yagi, Noriaki Ichihashi, Keisuke Uemura, Masaki Takao, Nobuhiko Sugano, Yoshinobu Sato

Purpose: Manual annotations for training deep learning models in auto-segmentation are time-intensive. This study introduces a hybrid representation-enhanced sampling strategy that integrates both density and diversity criteria within an uncertainty-based Bayesian active learning (BAL) framework to reduce annotation efforts by selecting the most informative training samples.

Methods: The experiments are performed on two lower extremity datasets of MRI and CT images, focusing on the segmentation of the femur, pelvis, sacrum, quadriceps femoris, hamstrings, adductors, sartorius, and iliopsoas, utilizing a U-net-based BAL framework. Our method selects uncertain samples with high density and diversity for manual revision, optimizing for maximal similarity to unlabeled instances and minimal similarity to existing training data. We assess accuracy and efficiency using the Dice score and a proposed metric called reduced annotation cost (RAC), respectively. We further evaluate the impact of various acquisition rules on BAL performance and design an ablation study to estimate their effectiveness.
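The acquisition rule can be pictured as scoring each unlabeled candidate by its Bayesian uncertainty weighted by density (similarity to the unlabeled pool) and diversity (dissimilarity to the current training set). The sketch below is a generic rendering of that idea; the embeddings and weights are assumptions.

```python
# Hybrid acquisition score: samples that are uncertain AND representative AND
# novel are sent for manual revision first.
import numpy as np

def acquisition_scores(unc, emb_pool, emb_train, w_den=1.0, w_div=1.0):
    """unc: (n,) per-candidate uncertainty; emb_pool: (n, d) candidate
    embeddings; emb_train: (m, d) embeddings of already-labeled data."""
    def cos(A, B):
        A = A / np.linalg.norm(A, axis=1, keepdims=True)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return A @ B.T
    density = cos(emb_pool, emb_pool).mean(axis=1)          # high = representative
    diversity = 1.0 - cos(emb_pool, emb_train).max(axis=1)  # high = unlike training set
    return unc * (w_den * density + w_div * diversity)

# e.g. query = np.argsort(-acquisition_scores(unc, emb_pool, emb_train))[:k]
```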

Results: In the MRI and CT datasets, our method was superior or comparable to existing ones, achieving a 0.8% Dice and 1.0% RAC increase in CT (statistically significant), and a 0.8% Dice and 1.1% RAC increase in MRI (not statistically significant) in volume-wise acquisition. Our ablation study indicates that combining the density and diversity criteria enhances the efficiency of BAL in musculoskeletal segmentation compared to using either criterion alone.

Conclusion: Our sampling method is proven efficient in reducing annotation costs in image segmentation tasks. The combination of the proposed method and our BAL framework provides a semi-automatic way for efficient annotation of medical image datasets.

Citations: 0
Domain transformation using semi-supervised CycleGAN for improving performance of classifying thyroid tissue images.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-01-18 | DOI: 10.1007/s11548-024-03061-x
Yoshihito Ichiuji, Shingo Mabu, Satomi Hatta, Kunihiro Inai, Shohei Higuchi, Shoji Kido

Purpose: A large body of research has been conducted on the classification of medical images using deep learning, and thyroid tissue images can likewise be classified by cancer type. Deep learning requires a large amount of data, but not every medical institution can collect enough data for it. In such cases, a classifier trained at an institution that has sufficient data can be reused at other institutions. However, when using data from multiple institutions, it is necessary to unify the feature distributions, because the features of the data differ across acquisition conditions.

Methods: To unify the feature distributions, the data from Institution T are transformed to have a distribution closer to that of Institution S by applying a domain transformation using a semi-supervised CycleGAN. The proposed method enhances CycleGAN by considering the class-wise feature distributions so that the domain transformation is appropriate for classification. In addition, to address the problem of imbalanced data, with different amounts of data for each cancer type, several imbalanced-data methods are applied to the semi-supervised CycleGAN.
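Because the results below single out focal loss among the imbalance-handling methods, here is the standard multi-class focal loss (Lin et al.) as a sketch; how exactly it is wired into the semi-supervised CycleGAN classifier is our assumption.

```python
# Multi-class focal loss: down-weights well-classified examples so the
# minority cancer types contribute more to the gradient.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """logits: (B, C); targets: (B,) class indices; alpha: optional (C,) class weights."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, targets.unsqueeze(1)).squeeze(1)  # log-prob of true class
    loss = -((1 - log_pt.exp()) ** gamma) * log_pt
    if alpha is not None:
        loss = alpha[targets] * loss
    return loss.mean()
```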

Results: The experimental results showed that classification performance improved when the dataset from Institution S was used as training data and the testing dataset from Institution T was classified after applying the domain transformation. In addition, among the methods addressing class imbalance, focal loss contributed the largest improvement in mean F1 score.

Conclusion: The proposed method achieved domain transformation of thyroid tissue images between the two domains, retaining the important class-related features across domains and achieving the best F1 score, with significant differences compared with other methods. In addition, the proposed method was further enhanced by addressing the class imbalance of the dataset.

Citations: 0
Deep learning-based automatic pipeline for 3D needle localization on intra-procedural 3D MRI.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-03-23 | DOI: 10.1007/s11548-024-03077-3
Wenqi Zhou, Xinzhou Li, Fatemeh Zabihollahy, David S Lu, Holden H Wu

Purpose: Accurate and rapid needle localization on 3D magnetic resonance imaging (MRI) is critical for MRI-guided percutaneous interventions. The current workflow requires manual needle localization on 3D MRI, which is time-consuming and cumbersome. Automatic methods using 2D deep learning networks for needle segmentation require manual image plane localization, while 3D networks are challenged by the need for sufficient training datasets. This work aimed to develop an automatic deep learning-based pipeline for accurate and rapid 3D needle localization on in vivo intra-procedural 3D MRI using a limited training dataset.

Methods: The proposed automatic pipeline adopted Shifted Window (Swin) Transformers and employed a coarse-to-fine segmentation strategy: (1) initial 3D needle feature segmentation with 3D Swin UNEt TRansfomer (UNETR); (2) generation of a 2D reformatted image containing the needle feature; (3) fine 2D needle feature segmentation with 2D Swin Transformer and calculation of 3D needle tip position and axis orientation. Pre-training and data augmentation were performed to improve network training. The pipeline was evaluated via cross-validation with 49 in vivo intra-procedural 3D MR images from preclinical pig experiments. The needle tip and axis localization errors were compared with human intra-reader variation using the Wilcoxon signed rank test, with p < 0.05 considered significant.
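The final step of the pipeline, turning a needle segmentation into a tip position and axis orientation, can be done with a principal-component fit of the segmented voxels; the authors' exact computation may differ, so treat this as an illustrative sketch.

```python
# Tip and axis from a binary 3D needle mask: the axis is the first principal
# direction of the voxel cloud; the tip is the extreme point along that axis.
import numpy as np

def needle_tip_and_axis(mask, spacing=(1.0, 1.0, 1.0)):
    """mask: (X, Y, Z) binary needle segmentation; spacing: voxel size in mm."""
    pts = np.argwhere(mask) * np.asarray(spacing)   # voxel indices -> mm coordinates
    center = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - center)          # PCA of the needle voxels
    axis = vt[0] / np.linalg.norm(vt[0])
    proj = (pts - center) @ axis
    tip = pts[np.argmax(proj)]                      # which end is distal is a convention
    return tip, axis
```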

Results: The average end-to-end computational time for the pipeline was 6 s per 3D volume. The median Dice scores of the 3D Swin UNETR and 2D Swin Transformer in the pipeline were 0.80 and 0.93, respectively. The median 3D needle tip and axis localization errors were 1.48 mm (1.09 pixels) and 0.98°, respectively. Needle tip localization errors were significantly smaller than human intra-reader variation (median 1.70 mm; p < 0.01).

Conclusion: The proposed automatic pipeline achieved rapid pixel-level 3D needle localization on intra-procedural 3D MRI without requiring a large 3D training dataset and has the potential to assist MRI-guided percutaneous interventions.

Citations: 0
High-quality semi-supervised anomaly detection with generative adversarial networks.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2023-11-09 | DOI: 10.1007/s11548-023-03031-9
Yuki Sato, Junya Sato, Noriyuki Tomiyama, Shoji Kido

Purpose: The visualization of an anomaly area is easier in anomaly detection methods that use generative models rather than classification models. However, achieving both anomaly detection accuracy and a clear visualization of anomalous areas is challenging. This study aimed to establish a method that combines both detection accuracy and clear visualization of anomalous areas using a generative adversarial network (GAN).

Methods: In this study, StyleGAN2 with adaptive discriminator augmentation (StyleGAN2-ADA), which can generate high-resolution, high-quality images from a limited number of training images, was used as the image generation model, and a pixel-to-style-to-pixel (pSp) encoder was used to convert images into intermediate latent variables. We combined existing methods for training and proposed a method for calculating anomaly scores from the intermediate latent variables. The combined approach is called high-quality anomaly GAN (HQ-AnoGAN).
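A hedged sketch of what an anomaly score built from intermediate latents might look like is given below: it mixes an image-space reconstruction term with a latent-space distance to normal training data. The weighting and distance choices stand in for the paper's exact formula, which the abstract does not specify.

```python
# Anomaly score from a pSp-style encoder and StyleGAN2-ADA generator:
# poorly reconstructed images with atypical latents score high.
import torch

def anomaly_score(x, encoder, generator, ref_latents=None, w_img=1.0, w_lat=1.0):
    """x: (B, C, H, W) images; encoder/generator: trained pSp and StyleGAN2-ADA
    networks; ref_latents: (n, ...) latents of normal training images."""
    with torch.no_grad():
        w = encoder(x)                            # intermediate latent variables
        x_hat = generator(w)                      # reconstruction of x
    img_term = (x - x_hat).abs().mean(dim=(1, 2, 3))
    lat_term = torch.zeros_like(img_term)
    if ref_latents is not None:                   # distance to nearest normal latent
        d = torch.cdist(w.flatten(1), ref_latents.flatten(1))
        lat_term = d.min(dim=1).values
    return w_img * img_term + w_lat * lat_term    # higher = more anomalous
```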

Results: The experimental results obtained using three datasets demonstrated that HQ-AnoGAN has equal or better detection accuracy than the existing methods. The results of the visualization of abnormal areas using the generated images showed that HQ-AnoGAN could generate more natural images than the existing methods and was qualitatively more accurate in the visualization of abnormal areas.

Conclusion: In this study, HQ-AnoGAN comprising StyleGAN2-ADA and pSp encoder was proposed with an optimal anomaly score calculation method. The experimental results show that HQ-AnoGAN can achieve both high abnormality detection accuracy and clear visualization of abnormal areas; thus, HQ-AnoGAN demonstrates significant potential for application in medical imaging diagnosis cases where an explanation of diagnosis is required.

Citations: 0
Challenges in multi-centric generalization: phase and step recognition in Roux-en-Y gastric bypass surgery.
IF 2.3 | CAS Tier 3 (Medicine) | Q3 ENGINEERING, BIOMEDICAL | Pub Date: 2024-11-01 | Epub Date: 2024-05-18 | DOI: 10.1007/s11548-024-03166-3
Joël L Lavanchy, Sanat Ramesh, Diego Dall'Alba, Cristians Gonzalez, Paolo Fiorini, Beat P Müller-Stich, Philipp C Nett, Jacques Marescaux, Didier Mutter, Nicolas Padoy

Purpose: Most studies on surgical activity recognition utilizing artificial intelligence (AI) have focused mainly on recognizing one type of activity from small and mono-centric surgical video datasets. It remains speculative whether those models would generalize to other centers.

Methods: In this work, we introduce a large multi-centric multi-activity dataset consisting of 140 surgical videos (MultiBypass140) of laparoscopic Roux-en-Y gastric bypass (LRYGB) surgeries performed at two medical centers: the University Hospital of Strasbourg, France (StrasBypass70) and Inselspital, Bern University Hospital, Switzerland (BernBypass70). The dataset has been fully annotated with phases and steps by two board-certified surgeons. Furthermore, we assess the generalizability of, and benchmark, different deep learning models for the task of phase and step recognition in 7 experimental studies: (1) training and evaluation on BernBypass70; (2) training and evaluation on StrasBypass70; (3) training and evaluation on the joint MultiBypass140 dataset; (4) training on BernBypass70, evaluation on StrasBypass70; (5) training on StrasBypass70, evaluation on BernBypass70; (6) training on MultiBypass140, evaluation on BernBypass70; and (7) training on MultiBypass140, evaluation on StrasBypass70.
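The seven pairings enumerate the combinations of the two mono-centric splits and the joint dataset; written as data, they reduce to a plain loop, as in the sketch below with hypothetical `train_model`/`evaluate` stubs.

```python
# The 7 train/test pairings of the benchmark as a loop; train_model and
# evaluate are placeholder stubs for any phase/step-recognition model + metric.
experiments = [
    ("BernBypass70", "BernBypass70"),      # (1)
    ("StrasBypass70", "StrasBypass70"),    # (2)
    ("MultiBypass140", "MultiBypass140"),  # (3)
    ("BernBypass70", "StrasBypass70"),     # (4) cross-center
    ("StrasBypass70", "BernBypass70"),     # (5) cross-center
    ("MultiBypass140", "BernBypass70"),    # (6)
    ("MultiBypass140", "StrasBypass70"),   # (7)
]

def train_model(train_set):                # stub standing in for real training
    return f"model({train_set})"

def evaluate(model, test_set):             # stub standing in for accuracy/F1
    return f"eval({model}, {test_set})"

for train_set, test_set in experiments:
    print(train_set, "->", test_set, evaluate(train_model(train_set), test_set))
```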

Results: The model's performance is markedly influenced by the training data. The worst results were obtained in experiments (4) and (5), confirming the limited generalization capabilities of models trained on mono-centric data. The use of multi-centric training data in experiments (6) and (7) improves the generalization capabilities of the models, bringing them beyond the level of independent mono-centric training and validation (experiments (1) and (2)).

Conclusion: MultiBypass140 shows considerable variation in surgical technique and workflow of LRYGB procedures between centers. Therefore, generalization experiments demonstrate a remarkable difference in model performance. These results highlight the importance of multi-centric datasets for AI model generalization to account for variance in surgical technique and workflows. The dataset and code are publicly available at https://github.com/CAMMA-public/MultiBypass140.

Citations: 0