
Latest articles from the Journal of imaging informatics in medicine

Enhancing Radiology Clinical Histories Through Transformer-Based Automated Clinical Note Summarization.
Pub Date: 2026-02-01 Epub Date: 2025-04-07 DOI: 10.1007/s10278-025-01477-8
Niloufar Eghbali, Chad Klochko, Zaid Mahdi, Laith Alhiari, Jonathan Lee, Beatrice Knisely, Joseph Craig, Mohammad M Ghassemi

Insufficient clinical information provided in radiology requests, coupled with the cumbersome nature of electronic health records (EHRs), poses significant challenges for radiologists in extracting pertinent clinical data and compiling detailed radiology reports. Considering the challenges and time involved in navigating electronic medical records (EMRs), an automated method that accurately compresses the text while maintaining key semantic information could significantly enhance the efficiency of radiologists' workflow. The purpose of this study is to develop and demonstrate an automated tool for clinical note summarization, with the goal of extracting the clinical information most pertinent to radiological assessments. We adopted a transfer learning methodology from the natural language processing domain to fine-tune a transformer model for abstracting clinical reports. We employed a dataset consisting of 1000 clinical notes from 970 patients who underwent knee MRI, all manually summarized by radiologists. The fine-tuning process involved a two-stage approach, starting with self-supervised denoising and then focusing on the summarization task. The model successfully condensed clinical notes by 97% while aligning closely with radiologist-written summaries, as evidenced by a cosine similarity of 0.9 and a ROUGE-1 score of 40.18. In addition, statistical analysis demonstrated fair agreement among specialists (Fleiss kappa = 0.32) on the model's effectiveness in producing more relevant clinical histories than those included in the exam requests. The proposed model effectively summarized clinical notes for knee MRI studies, demonstrating potential for improving radiology reporting efficiency and accuracy.
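The evaluation metrics quoted in this abstract (compression ratio, cosine similarity, ROUGE-1) are easy to illustrate. The sketch below uses simple bag-of-words implementations; it is not the authors' evaluation code, and real pipelines typically use proper tokenizers and embedding-based similarity rather than whitespace splitting.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F-measure: unigram overlap between candidate and reference."""
    c, r = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum(min(c[w], r[w]) for w in c)
    if overlap == 0:
        return 0.0
    prec, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * prec * rec / (prec + rec)

def compression_ratio(note: str, summary: str) -> float:
    """Fraction of the original note's words removed by the summary."""
    return 1.0 - len(summary.split()) / len(note.split())
```

On these definitions, a 3-word summary of a 100-word note gives a compression ratio of 0.97, matching the headline figure in the abstract.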

Citations: 0
Gaussian Function Model for Task-Specific Evaluation in Medical Imaging: A Theoretical Investigation.
Pub Date: 2026-02-01 Epub Date: 2025-04-24 DOI: 10.1007/s10278-025-01511-9
Sho Maruyama

In medical image diagnosis, understanding image characteristics is crucial for selecting and optimizing imaging systems and advancing their development. Objective image quality assessments, based on specific diagnostic tasks, have become a standard in medical image analysis, bridging the gap between experimental observations and clinical applications. However, conventional task-based assessments often rely on ideal observer models that assume target signals have circular shapes with well-defined edges. This simplification rarely reflects the true complexity of lesion morphology, where edges exhibit variability. This study proposes a more practical approach by employing a Gaussian distribution to represent target signal shapes. This study explicitly derives the task function for Gaussian signals and evaluates the detectability index through simulations based on head computed tomography (CT) images with low-contrast lesions. Detectability indices were calculated for both circular and Gaussian signals using non-prewhitening and Hotelling observer models. The results demonstrate that Gaussian signals consistently exhibit lower detectability indices compared to circular signals, with differences becoming more pronounced for larger signal sizes. Simulated images closely resembling actual CT images confirm the validity of these calculations. These findings quantitatively clarify the influence of signal shape on detection performance, highlighting the limitations of conventional circular models. Thus, it provides a theoretical framework for task-based assessments in medical imaging, offering improved accuracy and clinical relevance for future advancements in the field.
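In white noise, the non-prewhitening (NPW) observer's detectability index reduces to signal energy over noise level, which makes the circular-versus-Gaussian comparison easy to illustrate. A minimal numpy sketch, assuming signals matched by peak contrast and a nominal width (the paper's exact matching convention and noise model may differ):

```python
import numpy as np

def npw_detectability(signal: np.ndarray, noise_sigma: float) -> float:
    """NPW observer index in uncorrelated noise: d' = ||s|| / sigma_noise."""
    return float(np.sqrt(np.sum(signal ** 2)) / noise_sigma)

n = 65
yy, xx = np.mgrid[:n, :n] - n // 2          # pixel coordinates centered at 0
r2 = xx ** 2 + yy ** 2

radius, contrast, noise_sigma = 10.0, 1.0, 0.5
disk = contrast * (r2 <= radius ** 2)                     # sharp-edged circular signal
gauss = contrast * np.exp(-r2 / (2 * (radius / 2) ** 2))  # Gaussian signal, same peak contrast

d_disk = npw_detectability(disk, noise_sigma)
d_gauss = npw_detectability(gauss, noise_sigma)
```

Because the Gaussian profile carries less energy than a sharp-edged disk of comparable size and peak contrast, `d_gauss` comes out lower than `d_disk`, consistent with the trend the abstract reports.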

Citations: 0
Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach.
Pub Date: 2026-02-01 Epub Date: 2025-05-06 DOI: 10.1007/s10278-025-01465-y
Mudassir Khan, Mazliham Mohd Su'ud, Muhammad Mansoor Alam, Shaik Karimullah, Fahimuddin Shaik, Fazli Subhan

Breast cancer has remained one of the most frequent and life-threatening cancers in females globally, putting emphasis on better diagnostics in its early stages to improve therapy effectiveness and survival. This work enhances the assessment of breast cancer by employing progressive residual networks (PRN) and ResNet-50 within the framework of Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net). Built on concepts of deep learning, this integration optimizes feature extraction and raises the bar for classification effectiveness, achieving an accuracy of 99.63% in our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. The data were partitioned into separate segments, and the architecture's reliability was assessed using a fivefold cross-validation approach. The variability of precision, recall, and F1 scores depicted in the box plot also endorses the competency of the model in achieving the sensitivity and specificity required to combat false positive and false negative cases in real clinical practice. The evaluation of the error distribution strengthens the model's rationale by validating its practical application in medical image processing. The high feature extraction sensitivity, together with sophisticated classification methods, makes PRMS-Net a powerful tool for improving the early detection of breast cancer and subsequent patient prognosis.
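The fivefold cross-validation protocol with precision, recall, and F1 reporting can be sketched end to end. The sketch below is illustrative only: a synthetic 1-D feature and a midpoint-threshold rule stand in for PRMS-Net's learned features and classifier; only the fold bookkeeping and metric definitions carry over.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D feature: class 1 ("malignant") shifted upward vs class 0 ("benign").
X = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(2.0, 1.0, 100)])
y = np.concatenate([np.zeros(100), np.ones(100)])
perm = rng.permutation(X.size)
X, y = X[perm], y[perm]

def precision_recall_f1(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

fold_f1 = []
for k in range(5):                                   # fivefold cross-validation
    test = np.arange(X.size) % 5 == k
    train_x, train_y = X[~test], y[~test]
    # trivial classifier fit on the training folds only: midpoint of class means
    thr = (train_x[train_y == 0].mean() + train_x[train_y == 1].mean()) / 2
    pred = (X[test] > thr).astype(float)
    fold_f1.append(precision_recall_f1(y[test], pred)[2])

mean_f1 = float(np.mean(fold_f1))
```

The key discipline the loop shows is that the decision threshold is estimated from the training folds only and applied to the held-out fold, which is what makes the per-fold scores an honest reliability estimate.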

Citations: 0
Enhanced Pelvic CT Segmentation via Deep Learning: A Study on Loss Function Effects.
Pub Date: 2026-02-01 Epub Date: 2025-05-29 DOI: 10.1007/s10278-025-01550-2
Elnaz Ghaedi, Ali Asadi, Seyed Abolfazl Hosseini, Hossein Arabi

Effective radiotherapy planning requires precise delineation of organs at risk (OARs), but the traditional manual method is laborious and subject to variability. This study explores using convolutional neural networks (CNNs) for automating OAR segmentation in pelvic CT images, focusing on the bladder, prostate, rectum, and femoral heads (FHs) as an efficient alternative to manual segmentation. Utilizing the Medical Open Network for AI (MONAI) framework, we implemented and compared U-Net, ResU-Net, SegResNet, and Attention U-Net models and explored different loss functions to enhance segmentation accuracy. Our study involved 240 patients for prostate segmentation and 220 patients for the other organs. The models' performance was evaluated using metrics such as the Dice similarity coefficient (DSC), Jaccard index (JI), and the 95th percentile Hausdorff distance (95thHD), benchmarking the results against expert segmentation masks. SegResNet outperformed all models, achieving DSC values of 0.951 for the bladder, 0.829 for the prostate, 0.860 for the rectum, 0.979 for the left FH, and 0.985 for the right FH (p < 0.05 vs. U-Net and ResU-Net). Attention U-Net also excelled, particularly for bladder and rectum segmentation. Experiments with loss functions on SegResNet showed that Dice loss consistently delivered optimal or equivalent performance across OARs, while DiceCE slightly enhanced prostate segmentation (DSC = 0.845, p = 0.0138). These results indicate that advanced CNNs, especially SegResNet, paired with optimized loss functions, provide a reliable, efficient alternative to manual methods, promising improved precision in radiotherapy planning.
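The Dice similarity coefficient and the Dice loss that this study compares are simple to write down. A minimal numpy sketch of the metric itself (MONAI, which the authors used, provides learned-model versions such as `monai.losses.DiceLoss`; this standalone version is only for illustration):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for two masks."""
    inter = np.sum(pred * target)
    return float((2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps))

def dice_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Soft Dice loss: 1 - DSC (differentiable when pred holds probabilities)."""
    return 1.0 - dice_coefficient(pred, target)
```

With half-overlapping masks of equal size the coefficient is 0.5, and identical masks score (numerically) 1, which is why DSC values such as 0.951 for the bladder indicate near-perfect agreement with expert masks.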

Citations: 0
Machine learning-based model assists in differentiating Mycobacterium avium Complex Pulmonary Disease from Pulmonary Tuberculosis: A Multicenter Study.
Pub Date: 2026-02-01 Epub Date: 2025-04-01 DOI: 10.1007/s10278-025-01486-7
Jiacheng Zhang, Tingting Huang, Xu He, Dingsheng Han, Qian Xu, Fukun Shi, Lan Zhang, Dailun Hou

The number of Mycobacterium avium-intracellulare complex pulmonary disease patients is increasing globally. Distinguishing Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis is difficult due to similar manifestations and characteristics. We aimed to build and validate a machine learning model using clinical data and computed tomography features to differentiate them. This multi-centered, retrospective study included 169 patients diagnosed with Mycobacterium avium-intracellulare complex and pulmonary tuberculosis from date to date. Data were analyzed, and logistic regression, random forest, and support vector machine models were established and validated. Performance was evaluated using receiver operating characteristic and precision-recall curves. In total, 84 patients with Mycobacterium avium-intracellulare complex pulmonary disease and 85 with pulmonary tuberculosis were analyzed. Patients with Mycobacterium avium-intracellulare complex pulmonary disease were older. Hemoptysis rate, cavity number and morphology, bronchiectasis type, and distribution differed. The support vector machine model performed better. In the training set, the area under the curve was 0.960, and in the validation set it was 0.885. The precision-recall curve showed high accuracy and low recall for the support vector machine model. The support vector machine learning-based model, which integrates clinical data and computed tomography imaging features, exhibited excellent diagnostic performance and can assist in differentiating Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis.
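The area under the ROC curve reported here (0.960 training, 0.885 validation) has a useful probabilistic reading: it is the chance that a randomly chosen diseased case receives a higher model score than a randomly chosen non-diseased case. A minimal sketch of that rank-based (Mann-Whitney) computation, not the authors' pipeline:

```python
import numpy as np

def roc_auc(y_true: np.ndarray, scores: np.ndarray) -> float:
    """AUC as the probability that a random positive outscores a random
    negative (Mann-Whitney U statistic); tied scores count one half."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = np.sum(pos[:, None] > neg[None, :])
    ties = np.sum(pos[:, None] == neg[None, :])
    return float((greater + 0.5 * ties) / (pos.size * neg.size))
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the pairwise version above is O(n²) but makes the definition explicit.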

Citations: 0
TMAN: A Triple Morphological Feature Attention Network for Fine-Grained Classification of Breast Ultrasound Images.
Pub Date: 2026-02-01 Epub Date: 2025-04-08 DOI: 10.1007/s10278-025-01496-5
Dongyue Wang, Min Xue, Hui Wang

Accurately diagnosing various types of breast lesions is critical for assessing breast cancer risk and predicting patient outcomes, which necessitates a fine-grained classification approach. While convolutional neural networks (CNNs) are predominantly employed in fine-grained classification tasks for breast lesions, they often struggle to effectively capture and model the intricate relationships between local and global features, an aspect that is vital for achieving high classification accuracy. Additionally, Color Doppler Flow Imaging (CDFI) and Strain Elastography (SE) are two important ultrasound imaging techniques widely used in the diagnosis of breast lesions. However, their specific contributions to fine-grained classification have not been thoroughly investigated. In this paper, we introduce a Triple Morphological Feature Attention Network (TMAN) designed to enhance fine-grained classification of breast ultrasound images. The TMAN architecture comprises three key modules: Local Margin Attention (LMA), Structured Texture Attention (STA), and Fusion Attention (FA), each focused on extracting distinct morphological features. TMAN achieved an average accuracy of 74.40%, precision of 73.18%, and specificity of 96.02%, surpassing state-of-the-art methods. The findings reveal that incorporating CDFI significantly improved classification for malignant subtypes with a 10% accuracy boost, while SE had a negligible impact. These findings highlight the effectiveness of TMAN in extracting nuanced morphological features and advancing precision in breast ultrasound diagnosis. The source code is accessible at https://github.com/windywindyw/TMAN .
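The channel- and spatial-attention pattern underlying modules like TMAN's LMA/STA/FA can be shown in a few lines. The sketch below is a parameter-free illustration of the gating idea only; the paper's actual modules are learned (the authors' implementation is at the linked GitHub repository), whereas here the gates come directly from pooled statistics.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat: np.ndarray) -> np.ndarray:
    """feat: (C, H, W). Gate each channel by a squeeze of its global average,
    modeling dependencies between channels."""
    gate = sigmoid(feat.mean(axis=(1, 2)))        # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat: np.ndarray) -> np.ndarray:
    """feat: (C, H, W). Gate each spatial location by a squeeze of the
    channel-wise mean map, modeling dependencies across positions."""
    gate = sigmoid(feat.mean(axis=0))             # (H, W)
    return feat * gate[None, :, :]
```

Because each sigmoid gate lies in (0, 1), both operations rescale rather than replace the feature map, which is why attention modules compose cleanly with a CNN backbone.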

Citations: 0
SSL-DA: Semi-and Self-Supervised Learning with Dual Attention for Echocardiogram Segmentation.
Pub Date: 2026-02-01 Epub Date: 2025-05-12 DOI: 10.1007/s10278-025-01532-4
Lin Lv, Xing Han, Zhengxiang Sun, Zhaoguang Li, Xiuying Wang, Tong Jiang, Yiren Liu, Tianshu Li, Jingjing Xu, Liangzhen You, Guihua Yao, Feng-Rong Sun, Jianping Xing

Echocardiogram analysis plays a crucial role in assessing and diagnosing cardiac function, providing essential data to support medical diagnoses of heart disease. A key task, accurately identifying and segmenting the left ventricle (LV) in echocardiograms, remains challenging and labor-intensive. Current automated cardiac segmentation methods often lack the necessary accuracy and reproducibility, while semi-automated or manual annotations are excessively time-consuming. To address these limitations, we propose a novel segmentation framework, semi-and self-supervised learning with dual attention (SSL-DA) for echocardiogram segmentation. We start with a temporal masking network for pre-training. This network captures valuable information, such as echocardiogram periodicity. It also provides optimized initialization parameters for LV segmentation. We then employ a semi-supervised network to automatically segment the left ventricle, enhancing the model's learning with channel and spatial attention mechanisms to capture global channel dependencies and spatial dependencies across annotations. We evaluated SSL-DA on the publicly available EchoNet-Dynamic dataset, achieving a Dice similarity coefficient of 93.34% (95% CI, 93.23-93.46%), outperforming most prior CNN-based models. To further assess the generalization ability of SSL-DA, we conducted ablation experiments on the CAMUS dataset. Experimental results confirm that SSL-DA can quickly and accurately segment the left ventricle in echocardiograms, showing its potential for robust clinical application.
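The temporal-masking pretext task used for pre-training can be sketched as "hide some frames, learn to reconstruct them from the rest." The helper below is an illustrative sketch of the masking step only (the function name, zero-fill choice, and mask ratio are assumptions, not the paper's exact scheme):

```python
import numpy as np

def mask_frames(clip: np.ndarray, ratio: float, rng: np.random.Generator):
    """clip: (T, H, W) echo sequence. Zero out a random subset of frames and
    return the masked clip plus the boolean frame mask; the pretext task is
    to reconstruct the hidden frames from the visible ones."""
    n_frames = clip.shape[0]
    n_mask = max(1, int(round(ratio * n_frames)))
    hidden = rng.choice(n_frames, size=n_mask, replace=False)
    mask = np.zeros(n_frames, dtype=bool)
    mask[hidden] = True
    masked = clip.copy()
    masked[mask] = 0.0
    return masked, mask
```

A reconstruction loss evaluated only on the masked frames forces the network to exploit the periodicity of the cardiac cycle, which is the information the abstract says the pre-training captures.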

Pages: 948-961 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920964/pdf/
Citations: 0
Deep Learning on Misaligned Dual-Energy Chest X-ray Images Using Paired Cycle-Consistent Generative Adversarial Networks. 基于配对循环一致生成对抗网络的错位双能量胸部x线图像深度学习。
Pub Date : 2026-02-01 Epub Date: 2025-05-05 DOI: 10.1007/s10278-025-01508-4
Yasuyuki Ueda, Misato Niu, Riko Shimazaki, Asumi Yamazaki, Masashi Seki, Takayuki Ishida

Dual-energy subtraction (DES) chest X-ray images (CXRs) are often affected by motion artifacts resulting from patients' voluntary or involuntary movements, even in clinical settings. Additionally, the mediastinum and upper abdominal regions in low-energy (LE) CXRs are susceptible to signal insufficiency due to inadequate input photon numbers. Current image processing techniques for removing motion artifacts and statistical noise from DES-CXRs are insufficient, and potential algorithms for these tasks remain largely unexplored. We propose a framework based on paired cycle-consistent generative adversarial networks to effectively remove motion artifacts and statistical noise from DES-CXRs. The proposed method incorporates ensemble discriminators, differentiable augmentation, anti-aliased convolution layers, and a basic 8-layer U-Net generator. The method was trained and tested with sixfold cross-validation on a clinical image dataset comprising 600 examinations of individuals who underwent dual-energy chest X-ray imaging for diagnostic purposes. It demonstrated a remarkable improvement in motion artifact suppression: the full width at 10% of the maximum improved from 0.216 ± 0.0720 to 0.200 ± 0.0783 for the left-lung region of interest, which includes the cardiac region. Furthermore, it outperformed the method of a previous study, with a peak signal-to-noise ratio of 50.7 ± 3.68 and a structural similarity index of 0.997 ± 0.0152 for LE images, and a Fréchet inception distance of 85.0 ± 3.52 for bone-suppressed DES images. The proposed method significantly outperforms existing techniques for removing motion artifacts and statistical noise and shows strong potential for clinical applications in chest X-ray imaging.
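The peak signal-to-noise ratio used above to compare restored and reference images can be sketched as follows; the toy pixel lists and function name are illustrative assumptions, not data from the paper:

```python
import math

# Illustrative PSNR computation for comparing a restored image against a
# reference; images here are toy flat lists of 8-bit pixel values.

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 20*log10(MAX) - 10*log10(MSE)."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val) - 10 * math.log10(mse)

ref = [10, 20, 30, 40]
out = [11, 19, 30, 41]
print(round(psnr(ref, out), 2))  # → 49.38
```

Higher PSNR means the restored image deviates less from the reference, so the reported 50.7 dB indicates very small residual error.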

Pages: 827-841 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920848/pdf/
Citations: 0
Optimized Feature Selection and Deep Neural Networks to Improve Heart Disease Prediction. 优化特征选择和深度神经网络改善心脏病预测。
Pub Date : 2026-02-01 Epub Date: 2025-04-16 DOI: 10.1007/s10278-025-01435-4
Changming Tan, Zhaoshun Yuan, Feng Xu, Dang Xie

Heart disease remains a significant health threat due to its high mortality rate and increasing prevalence. Early prediction using basic physical markers from routine exams is crucial for timely diagnosis and intervention. However, manual analysis of large datasets can be labor-intensive and error-prone. Our goal is to rapidly and reliably anticipate cardiac disease using a variety of body signs. This research presents a unique model for heart disease prediction: a system that blends a deep convolutional neural network with a feature selection technique based on LinearSVC. This integrated feature selection method selects a subset of characteristics that are strongly linked with heart disease, and we feed these features into the deep convolutional neural network that we constructed. In addition, to improve the speed of the predictor and avoid gradient vanishing or explosion, the network's hyperparameters were tuned using a random search algorithm. The proposed method was evaluated using the UCI and MIT datasets. The predictor is evaluated using a number of indicators, such as accuracy, recall, precision, and F1 score. The results demonstrate that our model attains accuracy rates of 98.16%, 98.2%, 95.38%, and 97.84% on the UCI dataset, with an average MCC score of 90%. These results affirm the efficacy and reliability of the proposed technique for predicting heart disease.
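The random-search tuning step described above can be sketched in a few lines; the search space, the scoring function, and all names below are illustrative stand-ins (the paper tunes a deep network, whereas here the objective is a toy function):

```python
import random

# Sketch of random-search hyperparameter tuning: sample configurations
# uniformly from a discrete search space and keep the best-scoring one.

def random_search(space, score_fn, n_trials=50, seed=0):
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        s = score_fn(cfg)
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Hypothetical search space for a small network.
space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "hidden_units": [32, 64, 128],
    "dropout": [0.1, 0.3, 0.5],
}

# Toy objective standing in for validation accuracy:
# prefers lr=1e-3, many hidden units, low dropout.
def toy_score(cfg):
    return (-abs(cfg["learning_rate"] - 1e-3) * 100
            + cfg["hidden_units"] / 128.0
            - cfg["dropout"])

best, score = random_search(space, toy_score, n_trials=200)
print(best)
```

Unlike grid search, the trial budget is fixed regardless of how many hyperparameters are added, which is why random search is a common choice for tuning deep networks.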

Pages: 908-925 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920954/pdf/
Citations: 0
Improving Ovarian Cancer Subtyping with Computer Vision Models on Tiled Histopathological Images. 利用平铺组织病理图像的计算机视觉模型改进卵巢癌亚型。
Pub Date : 2026-02-01 Epub Date: 2025-05-20 DOI: 10.1007/s10278-025-01546-y
Sterling Ramroach, Rikaard Hosein

Ovarian cancer remains one of the most challenging cancers to diagnose due to its non-specific symptoms, lack of reliable screening tests, and the complexity of detecting abnormalities. Accurate subtype classification is crucial for personalised treatment and improved patient outcomes. In this study, we developed a machine learning pipeline that fine-tunes pre-trained computer vision models to classify ovarian cancer subtypes from whole-slide images (WSIs). Using targeted tissue masks for necrosis, stroma, and tumour regions as a proof of concept, we demonstrated the efficacy of tiling masked regions to transform a complex detection-then-classification problem into a simpler classification task. Our method achieved high accuracy in tile-level classification, which we subsequently extended to subtype classification via majority voting over tiled images. Precision exceeded 90% across subtypes, highlighting the potential of scalable, automated systems to assist in ovarian cancer diagnostics. These findings contribute to the broader field of computational pathology, paving the way for enhanced diagnostic consistency and accessibility in clinical settings.
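The slide-level majority-voting step described above can be sketched as follows; the subtype labels and function name are our own illustration, not labels taken from the paper:

```python
from collections import Counter

# Sketch of slide-level subtype prediction: each tile of a whole-slide
# image gets a per-tile label, and the slide takes the majority label.

def slide_subtype(tile_predictions):
    """Return the most frequent tile-level label for a whole-slide image."""
    counts = Counter(tile_predictions)
    # most_common breaks ties by first-insertion order (Python 3.7+)
    return counts.most_common(1)[0][0]

tiles = ["serous", "serous", "mucinous", "serous", "endometrioid"]
print(slide_subtype(tiles))  # → serous
```

Voting over many tiles makes the slide-level call robust to occasional tile-level misclassifications, since a few wrong tiles cannot outvote the majority.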

Pages: 620-626 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12920868/pdf/
Citations: 0