Pub Date: 2026-02-02 | DOI: 10.1007/s10278-026-01851-0
Fang Liu, Longwei Jia, Xiaoming Zhou, Lan Yu
This study assessed the value of radiomics analysis in differentiating clear cell renal cell carcinoma (ccRCC) from renal oncocytoma (RO) using multi-phase contrast-enhanced CT. A retrospective analysis included 43 ccRCC and 43 RO cases (2013-2024). Preoperative three-phase CT scans (corticomedullary [CP], nephrographic [NP], excretory [EP]) were analyzed. Tumor regions of interest (ROIs) were semi-automatically segmented in 3D-Slicer, and texture features were extracted with IBEX software. Receiver operating characteristic (ROC) curves and area under the curve (AUC) values were calculated for the selected parameters in each phase. A support vector machine (SVM) classifier trained on the texture parameters was then evaluated with ROC analysis. All phases showed high diagnostic performance (AUC > 0.9), with NP performing best (AUC = 0.952; accuracy, 0.88; sensitivity, 0.91; specificity, 0.87). The intensity histogram feature IH_Skewness differed significantly between ccRCC and RO in the CP and NP phases (P < 0.01 for both), with AUC values of 0.75 (CP) and 0.79 (NP). Combining LASSO dimensionality reduction with an SVM on multi-phase CT radiomics features effectively differentiated ccRCC from RO, highlighting texture analysis as a promising clinical tool.
Title: Radiomics to Differentiate Renal Oncocytoma from Clear Cell Renal Cell Carcinoma on Contrast-Enhanced CT: A Preliminary Study (Journal of imaging informatics in medicine)
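A minimal sketch, in scikit-learn, of the LASSO-plus-SVM pipeline this abstract describes: LASSO drives uninformative feature coefficients to exactly zero, and an SVM is trained on the surviving features and scored with ROC AUC. The feature matrix, labels, and every parameter below are illustrative assumptions, not the authors' data or settings.

```python
# Sketch (not the authors' code): LASSO feature selection -> SVM -> ROC AUC.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(86, 50))        # placeholder: 86 cases x 50 radiomics features
y = rng.integers(0, 2, size=86)      # placeholder labels: 1 = ccRCC, 0 = RO

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# LASSO shrinks uninformative feature weights to exactly zero.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_tr, y_tr)
keep = lasso.named_steps["lassocv"].coef_ != 0
if not keep.any():                   # fall back if LASSO drops everything
    keep[:] = True

svm = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, svm.predict_proba(X_te[:, keep])[:, 1])
print(f"AUC = {auc:.3f}")
```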
Pub Date: 2026-02-01 | Epub Date: 2025-05-12 | DOI: 10.1007/s10278-025-01480-z
Rabiye Kılıç, Ahmet Yalçın, Fatih Alper, Emin Argun Oral, Ibrahim Yucel Ozbek
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, and early detection is crucial for effective treatment. This study introduces a method for the early diagnosis of liver diseases that differentiates between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and avoid the risks associated with contrast agents. The proposed approach combines an automatic liver region detection stage based on RCNN with a CNN-based classification stage. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed to evaluate the method. The experimental results demonstrate the value of the two-stage design. For the 2-class problem (healthy vs. non-healthy), an accuracy of 0.936 (95% CI: 0.925-0.947) was obtained; for the 3-class problem (AE, tumor, healthy), the accuracy was 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach to liver classification without contrast agents. The framework also performs competitively against other state-of-the-art techniques, suggesting its applicability in clinical practice.
Title: Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images (Journal of imaging informatics in medicine, pp. 311-333)
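The accuracies above are reported with 95% confidence intervals (e.g., 0.936 [0.925-0.947]). One standard way to obtain such an interval for a classification accuracy is the normal approximation to a binomial proportion, sketched below; the authors' exact CI method and the counts used are not stated, so both are assumptions here.

```python
# Normal-approximation 95% CI for a classification accuracy (assumed method).
import math

def accuracy_ci(correct: int, total: int, z: float = 1.96):
    p = correct / total
    half = z * math.sqrt(p * (1 - p) / total)  # binomial standard error
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts, chosen only to land near the reported accuracy.
p, lo, hi = accuracy_ci(correct=7681, total=8206)
print(f"accuracy {p:.3f} (95% CI: {lo:.3f}-{hi:.3f})")
```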
Pub Date: 2026-02-01 | DOI: 10.1007/s10278-025-01522-6
Yusuke Asari, Koichiro Yasaka, Kazuki Endo, Jun Kanzawa, Naomasa Okimoto, Yusuke Watanabe, Yuichi Suzuki, Shiori Amemiya, Shigeru Kiryu, Osamu Abe
Super-resolution deep learning reconstruction (SR-DLR) is a promising tool for improving image quality by enhancing spatial resolution compared to conventional deep learning reconstruction (DLR). This study aimed to evaluate whether SR-DLR improves microbleed detection and visualization in brain magnetic resonance imaging (MRI) compared to DLR. This retrospective study included 69 patients (66.2 ± 13.8 years; 44 females) who underwent 3 T brain MRI with T2*-weighted 2D gradient echo and 3D flow-sensitive black blood imaging (reference standard) between June and August 2024. T2*-weighted images were reconstructed using SR-DLR and DLR. Three blinded readers detected microbleeds and assessed image quality, including microbleed and normal structure visibility, sharpness, noise, artifacts, and overall quality. Quantitative analysis involved measuring signal intensity along the septum pellucidum. Microbleed detection performance was analyzed using jackknife alternative free-response receiver operating characteristic analysis, while image quality was analyzed using the Wilcoxon signed-rank test and paired t-test. SR-DLR significantly outperformed DLR in microbleed detection (figure of merit: 0.690 vs. 0.645, p < 0.001). SR-DLR also demonstrated higher sensitivity for microbleed detection. Qualitative analysis showed better microbleed visualization for two readers (p < 0.001) and improved image sharpness for all readers (p ≤ 0.008). Quantitative analysis revealed enhanced sharpness, especially in full width at half maximum and edge rise slope (p < 0.001). SR-DLR improved image sharpness and quality, leading to better microbleed detection and visualization in brain MRI compared to DLR.
Title: Super-Resolution Deep Learning Reconstruction for T2*-Weighted Images: Improvement in Microbleed Lesion Detection and Image Quality (Journal of imaging informatics in medicine, pp. 805-814)
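The quantitative analysis above measures profile sharpness via full width at half maximum (FWHM) and edge rise slope along a signal intensity profile. The sketch below computes both for a synthetic 1-D profile; the linear-interpolation details and the example profiles are assumptions, not the study's measurement code.

```python
# FWHM and edge rise slope of a 1-D intensity profile (synthetic example).
import numpy as np

def fwhm(profile: np.ndarray, spacing: float = 1.0) -> float:
    """Width of the peak at half its maximum, via linear interpolation."""
    prof = profile - profile.min()
    half = prof.max() / 2.0
    above = np.where(prof >= half)[0]
    left, right = above[0], above[-1]

    def cross(i, j):  # interpolate the half-maximum crossing between samples i, j
        return i + (half - prof[i]) / (prof[j] - prof[i]) * (j - i)

    x_left = cross(left - 1, left) if left > 0 else float(left)
    x_right = cross(right + 1, right) if right < len(prof) - 1 else float(right)
    return abs(x_right - x_left) * spacing

x = np.linspace(-10, 10, 201)                 # 0.1 mm sample spacing (assumed)
sharp = np.exp(-x**2 / (2 * 1.0**2))          # narrower peak -> smaller FWHM
blurry = np.exp(-x**2 / (2 * 2.0**2))
print(f"FWHM sharp={fwhm(sharp, 0.1):.2f}, blurry={fwhm(blurry, 0.1):.2f}")

# Edge rise slope: maximum gradient magnitude along the profile.
print(f"max slope sharp={np.abs(np.gradient(sharp)).max() / 0.1:.3f}")
```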
Pub Date: 2026-02-01 | Epub Date: 2025-05-15 | DOI: 10.1007/s10278-025-01537-z
Sina Derakhshandeh, Ali Mahloojifar
Segmentation is one of the most significant steps in image processing. It separates a digital image into regions based on the differing characteristics of its pixels. In particular, segmentation of breast ultrasound images is widely used for cancer identification, enabling effective early diagnosis from medical images. Due to various ultrasound artifacts and noise sources, including speckle noise, low signal-to-noise ratio, and intensity heterogeneity, accurately segmenting medical images such as ultrasound images remains a challenging task. In this paper, we present a new method to improve the accuracy and effectiveness of breast ultrasound image segmentation. Specifically, we propose a neural network (NN) based on U-Net and an encoder-decoder architecture. Taking U-Net as the basis, both the encoder and decoder are developed by combining U-Net with other deep neural networks (ResNet and MultiResUNet) and by introducing a new block (Co-Block) that preserves low-level and high-level features as much as possible. The designed network is evaluated on the Breast Ultrasound Images (BUSI) dataset, which consists of 780 images categorized into three classes: normal, benign, and malignant. In our extensive evaluations on this public dataset, the designed network segments breast lesions more accurately than other state-of-the-art deep learning methods. With only 8.88 M parameters, our network (CResU-Net) obtained 82.88%, 77.5%, 90.3%, and 98.4% in terms of Dice similarity coefficient (DSC), intersection over union (IoU), area under the curve (AUC), and global accuracy (ACC), respectively, on the BUSI dataset.
Title: Modifying the U-Net's Encoder-Decoder Architecture for Segmentation of Tumors in Breast Ultrasound Images (Journal of imaging informatics in medicine, pp. 355-369)
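For reference, the two headline segmentation metrics quoted above, Dice similarity coefficient and intersection over union, reduce to a few lines of NumPy for binary masks. The masks below are toy examples; note that the two scores are monotonically related (Dice = 2·IoU / (1 + IoU)).

```python
# Dice and IoU for binary segmentation masks (toy example).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((128, 128), bool); pred[30:80, 30:80] = True
gt = np.zeros((128, 128), bool); gt[40:90, 40:90] = True
print(f"Dice={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```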
Pub Date: 2026-02-01 | Epub Date: 2025-05-30 | DOI: 10.1007/s10278-025-01562-y
André Ricardo Backes
Helicobacter pylori (H. pylori) is a globally prevalent pathogenic bacterium. It affects over 4 billion people worldwide and contributes to many gastric diseases, such as gastritis, peptic ulcers, and cancer. Its diagnosis traditionally relies on histopathological analysis of endoscopic biopsies by trained pathologists, a labor-intensive and time-consuming process that risks overlooking small bacterial populations. Another limiting factor is cost, which can range from a few dozen to hundreds of dollars. To automate this process, our study evaluated the potential of various texture features for binary classification of 204 histopathological images (H. pylori-positive and H. pylori-negative cases). Texture is an important attribute that describes the appearance of a surface based on its composition and structure. In our study, we discarded the color information in the samples and computed texture features using various methods, selected for their performance, novelty, and ability to highlight different aspects of the image. We also investigated how combining these features via the Particle Swarm Optimization (PSO) algorithm impacts classification performance. Results demonstrated that well-known texture analysis methods remain competitive, obtaining the highest accuracy (94.61%) and F1-score (94.47%), suggesting a robust balance between precision and recall and surpassing state-of-the-art techniques such as ResNet-101 by a margin of 4.41%.
Title: Fusion of Texture Features Applied to H. pylori Infection Classification from Histopathological Images (Journal of imaging informatics in medicine, pp. 627-638)
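As an illustration of the classical texture features this study builds on, the sketch below computes gray-level co-occurrence matrix (GLCM) statistics with scikit-image. The input patch and the distance/angle choices are placeholders; the paper's exact feature set is not reproduced here.

```python
# GLCM texture features (contrast, homogeneity, energy, correlation)
# for a stand-in grayscale patch, using scikit-image.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # placeholder patch

glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```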
Pub Date: 2026-02-01 | Epub Date: 2025-05-27 | DOI: 10.1007/s10278-025-01540-4
Maria K Jaakkola, Marcela Xiomara Rivera Pineda, Rafael Díaz, Maria Rantala, Anna Jalo, Henri Kärpijoki, Teemu Saari, Teemu Maaniitty, Thomas Keller, Heli Louhi, Saara Wahlroos, Merja Haaparanta-Solin, Olof Solin, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Juhani Knuuti, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén
Segmentation is a routine step in PET image analysis, yet few automatic tools have been developed for it. Excluding supervised methods, which have their own limitations, existing tools are typically designed for older, smaller images, and their implementations are no longer publicly available. Here, we test whether commonly used building blocks of automatic methods work with large, modern total-body PET images. Dynamic total-body images from five different datasets are used for evaluation, and the tested algorithms cover a wide range of preprocessing approaches and unsupervised segmentation methods. Validation is done by comparing the obtained segments to manually drawn ones using the Jaccard index, Dice score, precision, and recall as measures of match. Of the 17 considered segmentation methods, only 6 were computationally usable and provided enough segments for the needs of this study. Among these six feasible methods, hierarchical clustering and HDBSCAN had systematically the lowest Jaccard indices against the manual segmentations, whereas both GMM and k-means had median Jaccard indices of 0.58 across organ segments and datasets. GMM outperformed k-means on human data, while on rat images the two methods performed equally well, with k-means showing slightly stronger precision and GMM slightly stronger recall. We conclude that most commonly used unsupervised segmentation methods are computationally infeasible for modern PET images, and that the classical clustering algorithms k-means and especially the Gaussian mixture model are the most promising candidates for further method development. Even though preprocessing, particularly denoising, improved the results, small organs remained difficult to segment.
Title: Comparison of Automatic Segmentation and Preprocessing Approaches for Dynamic Total-Body 3D Pet Images with Different Pet Tracers (Journal of imaging informatics in medicine, pp. 382-399)
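A small sketch of the kind of comparison run above: k-means and a Gaussian mixture model cluster per-voxel feature vectors (stand-ins for time-activity curves), and each result is scored against a reference labeling with the Jaccard index. All data, feature dimensions, and cluster counts below are synthetic assumptions.

```python
# k-means vs. GMM on synthetic per-voxel features, scored with Jaccard.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import jaccard_score

rng = np.random.default_rng(2)
organ = rng.normal(loc=2.0, size=(500, 8))       # 500 voxels x 8 time frames
background = rng.normal(loc=0.0, size=(1500, 8))
X = np.vstack([organ, background])
truth = np.r_[np.ones(500, int), np.zeros(1500, int)]

for name, model in [("k-means", KMeans(n_clusters=2, n_init=10, random_state=0)),
                    ("GMM", GaussianMixture(n_components=2, random_state=0))]:
    labels = model.fit_predict(X)
    # Cluster ids are arbitrary; align to the reference before scoring.
    if jaccard_score(truth, labels) < jaccard_score(truth, 1 - labels):
        labels = 1 - labels
    print(name, f"Jaccard = {jaccard_score(truth, labels):.3f}")
```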
Pub Date: 2026-02-01 | DOI: 10.1007/s10278-025-01549-9
Isso Saito, Shinnosuke Yamamoto, Eichi Takaya, Ayaka Harigai, Tomomi Sato, Tomoya Kobayashi, Kei Takase, Takuya Ueda
This study aimed to develop a fully automated semantic placenta segmentation model that integrates the U-Net and SegNeXt architectures through ensemble learning. A total of 218 pregnant women with suspected placental abnormalities who underwent magnetic resonance imaging (MRI) were enrolled, yielding 1090 annotated images for developing a deep learning model for placental segmentation. The images were standardized and divided into training and test sets. The performance of the Placental Segmentation Network (PlaNet-S), which integrates U-Net and SegNeXt within an ensemble framework, was assessed using Intersection over Union (IoU) and counting connected components (CCC) against U-Net, U-Net++, and DS-transUNet. PlaNet-S had significantly higher IoU (0.78, SD = 0.10) than U-Net (0.73, SD = 0.13) (p < 0.005) and DS-transUNet (0.64, SD = 0.16) (p < 0.005), while the difference with U-Net++ (0.77, SD = 0.12) was not statistically significant. The CCC for PlaNet-S was significantly higher than that for U-Net (p < 0.005), U-Net++ (p < 0.005), and DS-transUNet (p < 0.005), matching the ground truth in 86.0%, 56.7%, 67.9%, and 20.9% of the cases, respectively. PlaNet-S achieved higher IoU than U-Net and DS-transUNet, and IoU comparable to U-Net++. Moreover, PlaNet-S significantly outperformed all three models in CCC, indicating better agreement with the ground truth. This model addresses the challenge of time-consuming, physician-assisted manual segmentation and offers potential for diverse applications in placental imaging analysis.
Title: PlaNet-S: an Automatic Semantic Segmentation Model for Placenta Using U-Net and SegNeXt (Journal of imaging informatics in medicine, pp. 400-410)
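The CCC criterion above counts connected components in a predicted mask, since a fragmented placenta prediction produces spurious extra components even when IoU looks reasonable. The sketch below counts components with SciPy and fuses two hypothetical model outputs by intersection; the fusion rule is an assumption for illustration, not PlaNet-S's actual ensembling.

```python
# Counting connected components (CCC) and a simple intersection fusion
# of two hypothetical segmentation outputs.
import numpy as np
from scipy import ndimage

def component_count(mask: np.ndarray) -> int:
    _, n = ndimage.label(mask)  # label returns (labeled array, component count)
    return n

pred_a = np.zeros((64, 64), bool); pred_a[10:30, 10:30] = True
pred_b = np.zeros((64, 64), bool); pred_b[12:32, 12:32] = True
pred_b[50:55, 50:55] = True  # spurious fragment in model B's output

ensemble = np.logical_and(pred_a, pred_b)  # intersection fusion (assumed rule)
for name, m in [("model A", pred_a), ("model B", pred_b), ("ensemble", ensemble)]:
    print(name, "components:", component_count(m))
```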
Pub Date: 2026-02-01 | Epub Date: 2025-04-07 | DOI: 10.1007/s10278-025-01477-8
Niloufar Eghbali, Chad Klochko, Zaid Mahdi, Laith Alhiari, Jonathan Lee, Beatrice Knisely, Joseph Craig, Mohammad M Ghassemi
Insufficient clinical information in radiology requests, coupled with the cumbersome nature of electronic health records (EHRs), poses significant challenges for radiologists in extracting pertinent clinical data and compiling detailed radiology reports. Given the challenges and time involved in navigating electronic medical records (EMRs), an automated method that accurately compresses the text while maintaining key semantic information could significantly improve the efficiency of radiologists' workflow. The purpose of this study is to develop and demonstrate an automated tool for clinical note summarization that extracts the clinical information most pertinent to radiological assessment. We adopted a transfer learning methodology from natural language processing to fine-tune a transformer model for abstractive summarization of clinical reports. We used a dataset of 1000 clinical notes from 970 patients who underwent knee MRI, all manually summarized by radiologists. Fine-tuning followed a two-stage approach: self-supervised denoising first, then the summarization task. The model condensed clinical notes by 97% while aligning closely with radiologist-written summaries, evidenced by a cosine similarity of 0.9 and a ROUGE-1 score of 40.18. In addition, statistical analysis (Fleiss kappa = 0.32) indicated fair agreement among specialists that the model produces more relevant clinical histories than those included in the exam requests. The proposed model effectively summarized clinical notes for knee MRI studies, demonstrating potential to improve radiology reporting efficiency and accuracy.
Title: Enhancing Radiology Clinical Histories Through Transformer-Based Automated Clinical Note Summarization (Journal of imaging informatics in medicine, pp. 1031-1039)
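ROUGE-1, the overlap score quoted above (40.18), is unigram precision/recall/F1 between a generated summary and a reference. A self-contained sketch with whitespace tokenization (a simplification of standard ROUGE tooling) and invented example texts:

```python
# Minimal ROUGE-1 F1 between a candidate summary and a reference.
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if not overlap:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Invented example texts, for illustration only.
print(rouge1_f1("knee pain after twisting injury, rule out meniscal tear",
                "twisting knee injury with pain; evaluate for meniscal tear"))
```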
Pub Date: 2026-02-01 | Epub Date: 2025-05-06 | DOI: 10.1007/s10278-025-01465-y
Mudassir Khan, Mazliham Mohd Su'ud, Muhammad Mansoor Alam, Shaik Karimullah, Fahimuddin Shaik, Fazli Subhan
Breast cancer remains one of the most frequent and life-threatening cancers in females globally, placing emphasis on better early-stage diagnostics to improve therapy effectiveness and survival. This work enhances breast cancer assessment by employing progressive residual networks (PRN) and ResNet-50 within the Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net) framework. Built on deep learning concepts, this integration optimizes feature extraction and raises classification effectiveness, reaching 99.63% accuracy in our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. The data were partitioned into folds to assess the architecture's reliability using fivefold cross-validation. The variability of precision, recall, and F1 scores, clearly depicted in the box plot, also supports the model's ability to deliver the sensitivity and specificity required to combat false-positive and false-negative cases in real clinical practice. The evaluation of the error distribution further strengthens the model's rationale by validating its practical application in medical image processing contexts. The combination of highly sensitive feature extraction and sophisticated classification makes PRMS-Net a powerful tool for improving early breast cancer detection and subsequent patient prognosis.
Title: Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach (Journal of imaging informatics in medicine, pp. 864-883)
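The fivefold cross-validation protocol described above can be sketched with scikit-learn's stratified splitter, reporting the per-fold precision, recall, and F1 spread that the paper visualizes as box plots. A plain SVM stands in for the full PRMS-Net model, and the data are synthetic placeholders; only the evaluation scaffolding is the point.

```python
# Stratified fivefold cross-validation with per-fold precision/recall/F1.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Placeholder two-class data with a small class shift so folds are meaningful.
X = np.vstack([rng.normal(0.0, 1.0, (100, 32)), rng.normal(0.8, 1.0, (100, 32))])
y = np.r_[np.zeros(100, int), np.ones(100, int)]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_validate(SVC(), X, y, cv=cv, scoring=("precision", "recall", "f1"))
for metric in ("precision", "recall", "f1"):
    vals = scores[f"test_{metric}"]
    print(f"{metric}: {vals.mean():.3f} +/- {vals.std():.3f}")
```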
Pub Date: 2026-02-01 | Epub Date: 2025-04-24 | DOI: 10.1007/s10278-025-01511-9
Sho Maruyama
In medical image diagnosis, understanding image characteristics is crucial for selecting and optimizing imaging systems and advancing their development. Objective image quality assessments, based on specific diagnostic tasks, have become a standard in medical image analysis, bridging the gap between experimental observations and clinical applications. However, conventional task-based assessments often rely on ideal observer models that assume target signals have circular shapes with well-defined edges. This simplification rarely reflects the true complexity of lesion morphology, where edges exhibit variability. This study proposes a more practical approach by employing a Gaussian distribution to represent target signal shapes. This study explicitly derives the task function for Gaussian signals and evaluates the detectability index through simulations based on head computed tomography (CT) images with low-contrast lesions. Detectability indices were calculated for both circular and Gaussian signals using non-prewhitening and Hotelling observer models. The results demonstrate that Gaussian signals consistently exhibit lower detectability indices compared to circular signals, with differences becoming more pronounced for larger signal sizes. Simulated images closely resembling actual CT images confirm the validity of these calculations. These findings quantitatively clarify the influence of signal shape on detection performance, highlighting the limitations of conventional circular models. Thus, it provides a theoretical framework for task-based assessments in medical imaging, offering improved accuracy and clinical relevance for future advancements in the field.
Title: Gaussian Function Model for Task-Specific Evaluation in Medical Imaging: A Theoretical Investigation (Journal of imaging informatics in medicine, pp. 794-804)
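A worked instance of the two model observers discussed above, in the standard Fourier form for stationary noise (normalization constants omitted): the Hotelling observer gives d'^2 = sum_f |S(f)|^2 / NPS(f), and the non-prewhitening (NPW) observer gives d'^2 = (sum_f |S(f)|^2)^2 / sum_f NPS(f)·|S(f)|^2, where S is the signal spectrum and NPS the noise power spectrum. The power-law NPS and the signal sizes below are illustrative assumptions; with total signal mass equalized, the soft-edged Gaussian yields lower detectability than the hard-edged disk, consistent with the study's finding.

```python
# Detectability indices for a disk vs. a Gaussian signal in correlated noise.
import numpy as np

n = 128
yy, xx = np.mgrid[:n, :n]
r2 = (xx - n / 2) ** 2 + (yy - n / 2) ** 2

disk = (r2 <= 8 ** 2).astype(float)      # circular signal with a hard edge
gauss = np.exp(-r2 / (2 * 5.0 ** 2))     # Gaussian signal of similar extent
gauss *= disk.sum() / gauss.sum()        # equalize total signal "mass"

# Illustrative power-law noise power spectrum (an assumption, not CT-measured).
fx = np.fft.fftfreq(n)[None, :]
fy = np.fft.fftfreq(n)[:, None]
nps = 1.0 / (np.hypot(fx, fy) + 0.05)

def dprime_hotelling(sig):
    S2 = np.abs(np.fft.fft2(sig)) ** 2
    return np.sqrt((S2 / nps).sum())

def dprime_npw(sig):
    S2 = np.abs(np.fft.fft2(sig)) ** 2
    return S2.sum() / np.sqrt((nps * S2).sum())

for name, sig in [("disk", disk), ("Gaussian", gauss)]:
    print(f"{name}: d'_Hotelling = {dprime_hotelling(sig):.1f}, "
          f"d'_NPW = {dprime_npw(sig):.1f}")
```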