
Latest articles in the Journal of imaging informatics in medicine

Radiomics to Differentiate Renal Oncocytoma from Clear Cell Renal Cell Carcinoma on Contrast-Enhanced CT: A Preliminary Study.
Pub Date : 2026-02-02 DOI: 10.1007/s10278-026-01851-0
Fang Liu, Longwei Jia, Xiaoming Zhou, Lan Yu

This study assessed the value of radiomics analysis in differentiating clear cell renal cell carcinoma (ccRCC) from renal oncocytoma (RO) using multi-phase contrast-enhanced CT. A retrospective analysis included 43 ccRCC and 43 RO cases (2013-2024). Preoperative three-phase CT scans (corticomedullary [CP], nephrographic [NP], excretory [EP]) were analyzed. Tumor regions of interest (ROIs) were semi-automatically segmented in 3D-Slicer, and texture features were extracted with IBEX software. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were calculated for the selected parameters in each phase. A support vector machine (SVM) classifier trained on the texture parameters was evaluated diagnostically via ROC analysis. All phases showed high diagnostic accuracy (AUC > 0.9), with NP demonstrating the highest performance (AUC = 0.952; accuracy, 0.88; sensitivity, 0.91; specificity, 0.87). The intensity-histogram skewness feature (IH_Skewness) differed significantly between ccRCC and RO in CP and NP (P < 0.01 for both), with AUC values of 0.75 (CP) and 0.79 (NP). Combining LASSO dimension reduction with an SVM on multi-phase CT radiomics features enabled effective differentiation between ccRCC and RO, highlighting texture analysis as a promising clinical tool.
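The pipeline above ends with a per-phase ROC/AUC evaluation of the SVM's texture scores. As a minimal sketch of that evaluation step (not the authors' code), AUC can be computed directly from the Mann-Whitney U statistic; the `roc_auc` helper and the toy labels/scores below are illustrative assumptions:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one
    (ties count half)."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy example: hypothetical SVM scores for two ccRCC and two RO cases.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # perfect separation -> 1.0
```

This rank-based formulation agrees with the usual trapezoidal area under the ROC curve and needs no explicit thresholding.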

Cited by: 0
Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.
Pub Date : 2026-02-01 Epub Date: 2025-05-12 DOI: 10.1007/s10278-025-01480-z
Rabiye Kılıç, Ahmet Yalçın, Fatih Alper, Emin Argun Oral, Ibrahim Yucel Ozbek

Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, and early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases by differentiating between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates an automatic liver region detection stage based on an RCNN, followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach. For the 2-class problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925-0.947) was obtained; for the 3-class problem (AE, tumor, and healthy), the accuracy was 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach for liver classification without the use of contrast agents. Furthermore, the framework demonstrates competitive performance compared with other state-of-the-art techniques, suggesting its applicability in clinical practice.
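The accuracies above are reported with 95% confidence intervals. The abstract does not state which interval method was used, so the sketch below assumes a simple Wald (normal-approximation) interval; `accuracy_ci` and its counts are hypothetical:

```python
import math

def accuracy_ci(correct, total, z=1.96):
    """Accuracy with a Wald (normal-approximation) confidence interval,
    clipped to [0, 1]; z = 1.96 gives ~95% coverage."""
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical example: 90 of 100 test images classified correctly.
p, lo, hi = accuracy_ci(90, 100)
print(f"{p:.3f} (95% CI: {lo:.3f}-{hi:.3f})")  # 0.900 (95% CI: 0.841-0.959)
```

For small samples or accuracies near 0 or 1, a Wilson or Clopper-Pearson interval would be a more robust choice than the Wald form shown here.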

Cited by: 0
Super-Resolution Deep Learning Reconstruction for T2*-Weighted Images: Improvement in Microbleed Lesion Detection and Image Quality.
Pub Date : 2026-02-01 Epub Date: 2025-04-29 DOI: 10.1007/s10278-025-01522-6
Yusuke Asari, Koichiro Yasaka, Kazuki Endo, Jun Kanzawa, Naomasa Okimoto, Yusuke Watanabe, Yuichi Suzuki, Shiori Amemiya, Shigeru Kiryu, Osamu Abe

Super-resolution deep learning reconstruction (SR-DLR) is a promising tool for improving image quality by enhancing spatial resolution compared to conventional deep learning reconstruction (DLR). This study aimed to evaluate whether SR-DLR improves microbleed detection and visualization in brain magnetic resonance imaging (MRI) compared to DLR. This retrospective study included 69 patients (66.2 ± 13.8 years; 44 females) who underwent 3 T brain MRI with T2*-weighted 2D gradient echo and 3D flow-sensitive black blood imaging (reference standard) between June and August 2024. T2*-weighted images were reconstructed using SR-DLR and DLR. Three blinded readers detected microbleeds and assessed image quality, including microbleed and normal structure visibility, sharpness, noise, artifacts, and overall quality. Quantitative analysis involved measuring signal intensity along the septum pellucidum. Microbleed detection performance was analyzed using jackknife alternative free-response receiver operating characteristic analysis, while image quality was analyzed using the Wilcoxon signed-rank test and paired t-test. SR-DLR significantly outperformed DLR in microbleed detection (figure of merit: 0.690 vs. 0.645, p < 0.001). SR-DLR also demonstrated higher sensitivity for microbleed detection. Qualitative analysis showed better microbleed visualization for two readers (p < 0.001) and improved image sharpness for all readers (p ≤ 0.008). Quantitative analysis revealed enhanced sharpness, especially in full width at half maximum and edge rise slope (p < 0.001). SR-DLR improved image sharpness and quality, leading to better microbleed detection and visualization in brain MRI compared to DLR.
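The qualitative comparisons above rely on the Wilcoxon signed-rank test for paired reader scores. A minimal sketch of its test statistic (the rank sums only, not the p-value) is shown below; the function name and the toy score pairs are illustrative, not from the study:

```python
def wilcoxon_W(x, y):
    """Wilcoxon signed-rank statistic for paired samples: drop zero
    differences, rank the absolute differences (average ranks for ties),
    and return the smaller of the positive- and negative-rank sums."""
    d = [b - a for a, b in zip(x, y) if b != a]
    abs_sorted = sorted(abs(v) for v in d)

    def avg_rank(v):
        ranks = [i + 1 for i, a in enumerate(abs_sorted) if a == abs(v)]
        return sum(ranks) / len(ranks)

    w_plus = sum(avg_rank(v) for v in d if v > 0)
    w_minus = sum(avg_rank(v) for v in d if v < 0)
    return min(w_plus, w_minus)

# Hypothetical paired image-quality scores (DLR vs. SR-DLR, one reader).
print(wilcoxon_W([3, 3, 2, 4], [4, 5, 4, 4]))  # no negative diffs -> 0.0
```

In practice one would obtain the p-value from this statistic via the exact null distribution or a normal approximation (e.g. `scipy.stats.wilcoxon`).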

Cited by: 0
Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images.
Pub Date : 2026-02-01 Epub Date: 2025-05-15 DOI: 10.1007/s10278-025-01537-z
Sina Derakhshandeh, Ali Mahloojifar

Segmentation is one of the most significant steps in image processing: it separates a digital image into regions based on differing pixel characteristics. In particular, segmentation of breast ultrasound images is widely used for cancer identification, and image segmentation enables effective early diagnosis of disease from medical images. Due to various ultrasound artifacts and noise, including speckle noise, low signal-to-noise ratio, and intensity heterogeneity, accurately segmenting medical images such as ultrasound images remains a challenging task. In this paper, we present a new method to improve the accuracy and effectiveness of breast ultrasound image segmentation. More precisely, we propose a neural network (NN) based on U-Net and an encoder-decoder architecture. Taking U-Net as the basis, both the encoder and decoder parts are developed by combining U-Net with other deep neural networks (Res-Net and MultiResUNet) and by introducing a new approach and block (Co-Block), which preserves as much of the low-level and high-level features as possible. The designed network is evaluated on the Breast Ultrasound Images (BUSI) dataset, which consists of 780 images categorized into three classes: normal, benign, and malignant. According to our extensive evaluations on this public breast ultrasound dataset, the designed network segments breast lesions more accurately than other state-of-the-art deep learning methods. With only 8.88 M parameters, our network (CResU-Net) obtained 82.88%, 77.5%, 90.3%, and 98.4% in terms of Dice similarity coefficient (DSC), intersection over union (IoU), area under the curve (AUC), and global accuracy (ACC), respectively, on the BUSI dataset.
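The DSC and IoU figures quoted above are standard overlap metrics between a predicted mask and the ground truth. A small illustrative helper (not the authors' code) showing how they are computed:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice similarity coefficient and intersection-over-union
    between two binary masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)

# Toy 2x2 masks: the prediction covers one extra pixel beyond the ground truth.
print(dice_iou([[1, 1], [0, 0]], [[1, 0], [0, 0]]))  # Dice 2/3, IoU 1/2
```

Note that Dice is always at least as large as IoU for the same masks (Dice = 2*IoU / (1 + IoU)), which is consistent with the 82.88% DSC vs. 77.5% IoU reported above.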

Cited by: 0
Fusion of Texture Features Applied to H. pylori Infection Classification from Histopathological Images.
Pub Date : 2026-02-01 Epub Date: 2025-05-30 DOI: 10.1007/s10278-025-01562-y
André Ricardo Backes

Helicobacter pylori (H. pylori) is a globally prevalent pathogenic bacterium. It affects over 4 billion people worldwide and contributes to many gastric diseases such as gastritis, peptic ulcers, and cancer. Its diagnosis traditionally relies on histopathological analysis of endoscopic biopsies by trained pathologists, a labor-intensive and time-consuming process that risks overlooking small bacterial populations. Another limiting factor is cost, which can vary from a few dozen to hundreds of dollars. To automate this process, our study evaluated the potential of various texture features for binary classification of 204 histopathological images (H. pylori-positive and H. pylori-negative cases). Texture is an important attribute that describes the appearance of a surface based on its composition and structure. In our study, we discarded the color information present in the samples and computed texture features using several methods, selected for their performance, novelty, and ability to highlight different aspects of the image. We also investigated how combining these features, with the combination selected by the Particle Swarm Optimization (PSO) algorithm, impacts classification performance. Results demonstrated that well-known texture analysis methods remain competitive, obtaining the highest accuracy (94.61%) and F1-score (94.47%), suggesting a robust balance between precision and recall and surpassing state-of-the-art techniques such as ResNet-101 by a margin of 4.41%.
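Texture descriptors of the kind fused in this study are classically derived from a gray-level co-occurrence matrix (GLCM). The sketch below computes a single-offset GLCM and two Haralick-style descriptors (contrast, homogeneity); it is an illustrative toy, not one of the specific feature extractors the paper evaluated:

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    plus two classic Haralick-style texture descriptors."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels))
    h, w = img.shape
    # Count co-occurring gray-level pairs at the given offset.
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()  # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    contrast = float((glcm * (i - j) ** 2).sum())
    homogeneity = float((glcm / (1.0 + (i - j) ** 2)).sum())
    return contrast, homogeneity

# A perfectly flat patch has zero contrast and maximal homogeneity.
print(glcm_features(np.zeros((4, 4), dtype=int)))  # (0.0, 1.0)
```

In a real pipeline one would compute several offsets/angles and feed the resulting descriptor vector to a classifier; `skimage.feature.graycomatrix` offers an optimized equivalent.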

Cited by: 0
Comparison of Automatic Segmentation and Preprocessing Approaches for Dynamic Total-Body 3D PET Images with Different PET Tracers.
Pub Date : 2026-02-01 Epub Date: 2025-05-27 DOI: 10.1007/s10278-025-01540-4
Maria K Jaakkola, Marcela Xiomara Rivera Pineda, Rafael Díaz, Maria Rantala, Anna Jalo, Henri Kärpijoki, Teemu Saari, Teemu Maaniitty, Thomas Keller, Heli Louhi, Saara Wahlroos, Merja Haaparanta-Solin, Olof Solin, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Juhani Knuuti, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén

Segmentation is a routine step in PET image analysis, yet few automatic tools have been developed for it. Excluding supervised methods, which have their own limitations, existing tools are typically designed for older, smaller images, and their implementations are no longer publicly available. Here, we test whether commonly used building blocks of automatic methods work with large, modern total-body PET images. Dynamic total-body images from five different datasets are used for evaluation, and the tested algorithms cover a wide range of preprocessing approaches and unsupervised segmentation methods. Validation is done by comparing the obtained segments to manually drawn ones using the Jaccard index, Dice score, precision, and recall as measures of match. Of the 17 segmentation methods considered, only 6 were computationally usable and provided enough segments for the needs of this study. Among these six feasible methods, hierarchical clustering and HDBSCAN had systematically the lowest Jaccard indices against the manual segmentations, whereas both GMM and k-means had median Jaccard indices of 0.58 across organ segments and datasets. GMM outperformed k-means on human data, but on rat images the two methods performed equally well, with k-means showing slightly stronger precision and GMM slightly stronger recall. We conclude that most commonly used unsupervised segmentation methods are computationally infeasible for modern PET images, with the classical clustering algorithms k-means and especially the Gaussian mixture model being the most promising candidates for further method development. Even though preprocessing, particularly denoising, improved the results, small organs remained difficult to segment.
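As an illustration of the kind of unsupervised building block evaluated here, the sketch below runs plain Lloyd's k-means on voxel intensities and scores a mask with the Jaccard index; it is a 1D toy under assumed data, not the study's implementation:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Plain Lloyd's k-means on a 1D array of voxel intensities."""
    values = np.asarray(values, dtype=float)
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        # Assign each voxel to the nearest center, then recompute centers.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for c in range(k):
            if np.any(labels == c):  # guard against empty clusters
                centers[c] = values[labels == c].mean()
    return labels, centers

def jaccard(a, b):
    """Jaccard index (IoU) between two binary masks."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return float(np.logical_and(a, b).sum() / np.logical_or(a, b).sum())

# Two well-separated intensity groups (hypothetical background vs. organ).
labels, centers = kmeans_1d([0.0, 0.1, 0.2, 10.0, 10.1, 10.2])
print(jaccard([1, 1, 0], [1, 0, 0]))  # 0.5
```

Real total-body PET segmentation would cluster full time-activity curves per voxel, which is where the computational-feasibility issues discussed above arise.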

Cited by: 0
PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt.
Pub Date : 2026-02-01 Epub Date: 2025-05-27 DOI: 10.1007/s10278-025-01549-9
Isso Saito, Shinnosuke Yamamoto, Eichi Takaya, Ayaka Harigai, Tomomi Sato, Tomoya Kobayashi, Kei Takase, Takuya Ueda

This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of the Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had a significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference with U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and IoU comparable to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenges of time-consuming, physician-assisted manual segmentation and offers potential for diverse applications in placental imaging analyses.
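Fusing two segmentation networks' outputs and counting connected components (as in the CCC metric) can be sketched as follows; the majority-vote fusion and BFS component counter are illustrative assumptions, not PlaNet-S's actual ensembling rule:

```python
import numpy as np
from collections import deque

def ensemble_vote(masks):
    """Pixelwise majority vote over a list of binary masks."""
    stack = np.asarray(masks, dtype=int)
    return (2 * stack.sum(axis=0) > len(masks)).astype(int)

def count_components(mask):
    """Number of 4-connected foreground components (BFS flood fill)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        n += 1  # new component found; flood-fill it
        queue = deque([(sy, sx)])
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
    return n

# Three hypothetical model outputs for a 1x2 image; only pixel 0 wins the vote.
print(ensemble_vote([[[1, 0]], [[1, 1]], [[0, 0]]]).tolist())  # [[1, 0]]
```

Comparing the component count of a predicted mask to that of the ground truth (ideally 1 for a placenta) penalizes fragmented predictions in a way plain IoU does not.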

Journal of imaging informatics in medicine, 2026: 400-410.
Enhancing Radiology Clinical Histories Through Transformer-Based Automated Clinical Note Summarization.
Pub Date : 2026-02-01 Epub Date: 2025-04-07 DOI: 10.1007/s10278-025-01477-8
Niloufar Eghbali, Chad Klochko, Zaid Mahdi, Laith Alhiari, Jonathan Lee, Beatrice Knisely, Joseph Craig, Mohammad M Ghassemi

Insufficient clinical information provided in radiology requests, coupled with the cumbersome nature of electronic health records (EHRs), poses significant challenges for radiologists in extracting pertinent clinical data and compiling detailed radiology reports. Considering the challenges and time involved in navigating electronic medical records (EMR), an automated method to accurately compress the text while maintaining key semantic information could significantly enhance the efficiency of radiologists' workflow. The purpose of this study is to develop and demonstrate an automated tool for clinical note summarization with the goal of extracting the most pertinent clinical information for the radiological assessments. We adopted a transfer learning methodology from the natural language processing domain to fine-tune a transformer model for abstracting clinical reports. We employed a dataset consisting of 1000 clinical notes from 970 patients who underwent knee MRI, all manually summarized by radiologists. The fine-tuning process involved a two-stage approach starting with self-supervised denoising and then focusing on the summarization task. The model successfully condensed clinical notes by 97% while aligning closely with radiologist-written summaries evidenced by a 0.9 cosine similarity and a ROUGE-1 score of 40.18. In addition, statistical analysis, indicated by a Fleiss kappa score of 0.32, demonstrated fair agreement among specialists on the model's effectiveness in producing more relevant clinical histories compared to those included in the exam requests. The proposed model effectively summarized clinical notes for knee MRI studies, thereby demonstrating potential for improving radiology reporting efficiency and accuracy.
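The reported summary-quality metrics can be sketched at the bag-of-words level as follows. This is an illustrative sketch only: the study's 0.9 cosine similarity was likely computed over learned embeddings rather than raw token counts, so the token-level cosine here is an assumption.

```python
import math
from collections import Counter

def rouge1_f1(candidate, reference):
    """ROUGE-1 F1: unigram overlap between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # multiset intersection of unigrams
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def cosine_sim(a, b):
    """Cosine similarity over token-count vectors (embedding-free approximation)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

ROUGE scores are conventionally reported scaled by 100, so an F1 of 0.4018 corresponds to the paper's ROUGE-1 of 40.18.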

Journal of imaging informatics in medicine, 2026: 1031-1039.
Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach.
Pub Date : 2026-02-01 Epub Date: 2025-05-06 DOI: 10.1007/s10278-025-01465-y
Mudassir Khan, Mazliham Mohd Su'ud, Muhammad Mansoor Alam, Shaik Karimullah, Fahimuddin Shaik, Fazli Subhan

Breast cancer remains one of the most frequent and life-threatening cancers in females globally, underscoring the need for better early-stage diagnostics to improve therapy effectiveness and survival. This work enhances breast cancer assessment by employing progressive residual networks (PRN) and ResNet-50 within the framework of Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net). Built on deep learning concepts, this integration optimizes feature extraction and raises the bar for classification effectiveness, achieving 99.63% accuracy in our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. The data were partitioned into separate folds to assess the architecture's reliability using a fivefold cross-validation approach. The variability of precision, recall, and F1 scores depicted in the box plot also supports the model's ability to balance sensitivity and specificity, both essential for limiting false-positive and false-negative cases in real clinical practice. The evaluation of the error distribution strengthens the model's rationale by validating its practical application in medical image processing. The high feature extraction sensitivity, together with sophisticated classification methods, makes PRMS-Net a powerful tool for improving the early detection of breast cancer and subsequent patient prognosis.
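The fivefold cross-validation protocol can be sketched as a stratified splitter that preserves class ratios in every fold. This is an illustrative sketch of the evaluation procedure, not the authors' code.

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=5, seed=0):
    """Yield (train_idx, test_idx) pairs with class ratios preserved per fold."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                 # randomize within each class
        for i, idx in enumerate(idxs):
            folds[i % k].append(idx)      # deal indices round-robin into folds
    for i in range(k):
        test = sorted(folds[i])
        train = sorted(idx for j in range(k) if j != i for idx in folds[j])
        yield train, test
```

Metrics such as accuracy, precision, recall, and F1 are then computed on each held-out fold, and their spread across folds is what a box plot like the one described above visualizes.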

Journal of imaging informatics in medicine, 2026: 864-883.
Gaussian Function Model for Task-Specific Evaluation in Medical Imaging: A Theoretical Investigation.
Pub Date : 2026-02-01 Epub Date: 2025-04-24 DOI: 10.1007/s10278-025-01511-9
Sho Maruyama

In medical image diagnosis, understanding image characteristics is crucial for selecting and optimizing imaging systems and advancing their development. Objective image quality assessments, based on specific diagnostic tasks, have become a standard in medical image analysis, bridging the gap between experimental observations and clinical applications. However, conventional task-based assessments often rely on ideal observer models that assume target signals have circular shapes with well-defined edges. This simplification rarely reflects the true complexity of lesion morphology, where edges exhibit variability. This study proposes a more practical approach by employing a Gaussian distribution to represent target signal shapes. It explicitly derives the task function for Gaussian signals and evaluates the detectability index through simulations based on head computed tomography (CT) images with low-contrast lesions. Detectability indices were calculated for both circular and Gaussian signals using non-prewhitening and Hotelling observer models. The results demonstrate that Gaussian signals consistently exhibit lower detectability indices than circular signals, with differences becoming more pronounced at larger signal sizes. Simulated images closely resembling actual CT images confirm the validity of these calculations. These findings quantitatively clarify the influence of signal shape on detection performance, highlighting the limitations of conventional circular models. The work thus provides a theoretical framework for task-based assessments in medical imaging, offering improved accuracy and clinical relevance for future advancements in the field.
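The effect of signal shape on detectability can be illustrated with the standard non-prewhitening (NPW) matched-filter result in uncorrelated Gaussian noise, where d' = ||s|| / sigma (the L2 norm of the signal over the noise standard deviation). This is a simplified white-noise sketch; the study additionally uses a Hotelling observer and realistic CT noise, and the disk radius and Gaussian width below are illustrative choices, not values from the paper.

```python
import numpy as np

def npw_dprime(signal, noise_std):
    """NPW observer detectability in white noise: d' = ||s||_2 / sigma."""
    return float(np.sqrt((signal ** 2).sum()) / noise_std)

def disk_signal(n, radius, amplitude):
    """Sharp-edged circular signal on an n x n grid (the conventional model)."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    return amplitude * ((x ** 2 + y ** 2) <= radius ** 2).astype(float)

def gaussian_signal(n, sigma, amplitude):
    """Soft-edged Gaussian signal on an n x n grid (the proposed model)."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    return amplitude * np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
```

For equal peak amplitude, the Gaussian concentrates less energy than a disk of comparable extent, so its d' is lower, consistent with the qualitative trend reported above.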

Journal of imaging informatics in medicine, 2026: 794-804.