
Latest Articles: International Journal of Imaging Systems and Technology

Automatic Segmentation of the Outer and Inner Foveal Avascular Zone by Convolutional Filters
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2026-01-02 | DOI: 10.1002/ima.70282
Carlos Ruiz-Tabuenca, Isabel Pinilla, Elvira Orduna-Hospital, Francisco Javier Salgado-Remacha

In this paper, a new algorithm for segmenting the foveal avascular zone in optical coherence tomography angiography images of the superficial capillary plexus is presented and evaluated. The algorithm is based on convolutional techniques and, for evaluation, has been compared against a collection of manual segmentations. Beyond raw performance, its main novelty is the ability to distinguish the purely avascular zone from the surrounding transition zone, whose importance has recently been pointed out. The algorithm has been tested on images of patients with different types of diabetes mellitus, obtaining error rates between 1% and 1.5%. In addition, statistics are reported for the segmented areas (including the transition zone, which had not previously been studied) as a function of the type of diabetes. Moreover, a linear trend in the ratio of outer to inner axes is also observed. Overall, the algorithm represents a new approach to the analysis of optical coherence tomography angiography images, offering clinicians a new and reliable tool for objective foveal avascular zone segmentation of the superficial capillary plexus. Both the code and the dataset used are publicly available in the cited repositories.
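The paper's actual convolutional filters are not reproduced in this listing. As a rough illustration of the general idea (convolve the image with a smoothing kernel, then mark locally dark, vessel-free pixels), a minimal sketch using SciPy; the box kernel and threshold here are assumptions, not the authors' method:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def segment_avascular(image, smooth_size=5, threshold=0.2):
    # Smooth with a box (uniform) convolution kernel, then flag
    # locally dark pixels as avascular. Illustrative only: kernel
    # shape and threshold are placeholders, not the paper's values.
    smoothed = uniform_filter(image.astype(float), size=smooth_size)
    return smoothed < threshold

# Synthetic OCTA-like frame: bright (perfused) background with one
# dark central zone standing in for the foveal avascular zone.
img = np.ones((64, 64))
img[24:40, 24:40] = 0.0
mask = segment_avascular(img)
```

The returned boolean mask is True inside the dark central region and False over the bright background.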

Citations: 0
DenseUNet Architecture With Asymmetric Kernels for Automatic Segmentation of Medical Imaging
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-30 | DOI: 10.1002/ima.70277
F. Duque-Vazquez Edgar, Cruz-Aceves Ivan, E. Sanchez-Yanez Raul, Jonathan Cepeda-Negrete

Medical imaging is a core component of modern healthcare, essential for early disease diagnosis and effective treatment planning. Deep learning has emerged as a powerful tool for medical image analysis, particularly in tasks such as segmentation, which is essential for identifying and delineating anatomical structures. A notable segmentation challenge is accurately detecting narrow, elongated features. A novel DenseUNet architecture, enhanced with asymmetric convolutional kernels and a squeeze-and-excitation block, is proposed; it is specifically designed to adapt to such shape characteristics. The iterated local search metaheuristic is employed to optimize the kernel size within a search space of 15², and a squeeze-and-excitation block is integrated to enhance feature recalibration and network efficiency. The best-performing asymmetric kernel achieved a processing time 5423 s faster than the conventional kernels. The proposed architecture is evaluated using the Dice coefficient and benchmarked against state-of-the-art architectures on three databases (TMJ: temporomandibular joints; DCA1 and ICA: coronary arteries), achieving Dice scores of 0.7800, 0.8231, and 0.8862, respectively. These enhancements demonstrate improved segmentation performance and contribute to the development of more accurate and robust medical imaging tools.
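The Dice coefficient used for evaluation here is the standard overlap metric 2|P ∩ T| / (|P| + |T|); a minimal NumPy implementation:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice similarity coefficient between two binary masks:
    # 2 * |pred AND target| / (|pred| + |target|).
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 masks overlapping on one row of four pixels:
p = np.zeros((4, 4), bool); p[:2] = True   # top two rows
t = np.zeros((4, 4), bool); t[1:3] = True  # middle two rows
score = dice(p, t)  # 2*4 / (8 + 8) = 0.5
```

The small epsilon keeps the score defined when both masks are empty.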

Citations: 0
Investigating the Correlation Between Ocular Diseases for Retinal Layer Fractal Dimensions Analysis Using Multiclass Segmentation With Attention U-Net
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-29 | DOI: 10.1002/ima.70274
M. Saranya, K. A. Sunitha, A. Asuntha, Pratyusha Ganne

This study proposes a novel diagnostic approach to retinal disease detection by combining deep learning-based segmentation with fractal dimension (FD) analysis on optical coherence tomography (OCT) images. Our primary goal is to enhance early detection of retinal diseases such as age-related macular degeneration (AMD), diabetic retinopathy (DR), central serous retinopathy (CSR), and macular hole (MH). We introduce the Attention U-shaped Network (AUNet), which builds upon the UNet architecture with Attention Gates (AGs) to improve focus on pathological structures in complex cases, achieving a high segmentation accuracy of 98.5% and a mean Intersection over Union (mIoU) of 0.91, outperforming existing models such as UNet++ and DeepLabV3+. Coupled with Fourier and Higuchi FD analysis, our method quantitatively assesses the complexity of retinal layers, identifying structural patterns that serve as early indicators of neural degeneration. Statistical tests reveal significant differences in FD values between diseased and healthy groups, underscoring the predictive power of retinal layers such as the retinal pigment epithelium (RPE), inner nuclear layer (INL), outer nuclear layer (ONL), and ellipsoid zone (EZ). This combined AUNet-FD approach represents an innovative tool for early diagnosis of retinal diseases, potentially enhancing clinical decision-making through precise, non-invasive analysis.
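The paper's Fourier and Higuchi FD pipelines are not given in this abstract; box counting, a related and simpler fractal-dimension estimator for 2D binary masks, illustrates what an FD value measures (the choice of dyadic scales and the least-squares fit are standard, not taken from the paper):

```python
import numpy as np

def box_counting_dimension(mask):
    # Count occupied boxes N(s) at dyadic box sizes s, then fit
    # log N(s) ~ -D * log s; the slope magnitude D is the estimate.
    n = mask.shape[0]
    sizes = [2 ** k for k in range(int(np.log2(n)))]
    counts = []
    for s in sizes:
        # Tile the mask into s x s blocks and count non-empty blocks.
        blocks = mask[: n - n % s, : n - n % s].reshape(n // s, s, -1, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled square is 2-dimensional.
square = np.ones((64, 64), dtype=bool)
d = box_counting_dimension(square)
```

A ragged, self-similar boundary yields a non-integer dimension between 1 and 2, which is the kind of layer-complexity signal the FD analysis quantifies.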

Citations: 0
A Comprehensive Deep-Learning Framework Integrating Lesion Segmentation and Stage Classification for Enhanced Diabetic Retinopathy Diagnosis
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-28 | DOI: 10.1002/ima.70272
Ramazan İncir, Ferhat Bozkurt

Diabetic retinopathy (DR), one of the most prevalent microvascular complications of diabetes, stands as a leading cause of vision loss globally. Due to its asymptomatic nature in early stages, delayed diagnosis and staging may result in irreversible visual impairment. Therefore, accurate and simultaneous lesion segmentation and stage classification of DR are of critical clinical importance. In this study, a two-stage, end-to-end, holistic framework is proposed for automated DR diagnosis. In the first stage, an Improved U-Net architecture enhanced with residual blocks and additional convolutional layers is employed to segment small and low-contrast lesions such as microaneurysms, hemorrhages, and hard/soft exudates with high precision. Model hyperparameters are optimized using the harmony search algorithm to enhance training efficiency. In the second stage, lesion-based weight maps obtained from the segmentation step are applied to fundus images from the APTOS dataset, generating enriched inputs for classification. A vision transformer (ViT)-based model, augmented with a Convolutional Block Attention Module (CBAM), is utilized to improve feature extraction. In addition, features derived from ViT are further refined using a graph convolutional network (GCN) and traditional machine-learning classifiers. The proposed approach achieves high performance in multi-class DR stage classification. Compared to existing studies, the framework demonstrates notable improvements in both segmentation and classification accuracy, offering a robust and generalizable solution for DR diagnosis.
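The abstract says lesion-based weight maps from the segmentation stage are applied to the fundus images before classification, without specifying the weighting scheme. One plausible minimal form of that operation, with the boost factor and renormalization being assumptions for illustration only:

```python
import numpy as np

def apply_lesion_weight_map(image, lesion_mask, boost=2.0):
    # Scale pixels inside segmented lesions by a boost factor,
    # leave the rest unchanged, and renormalize to [0, 1].
    # Hypothetical scheme; the paper's exact weighting is not given.
    weights = np.where(lesion_mask, boost, 1.0)
    weighted = image * weights
    return weighted / weighted.max()

img = np.full((8, 8), 0.4)                       # flat toy fundus patch
mask = np.zeros((8, 8), bool); mask[2:4, 2:4] = True  # one small lesion
out = apply_lesion_weight_map(img, mask)
```

After renormalization the lesion pixels sit at the top of the intensity range, so the downstream classifier sees them emphasized relative to healthy tissue.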

Citations: 0
EGCF-Net: Edge-Aware Graph-Based Attention Network With Enhanced Contextual Features for Medical Image Segmentation
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-28 | DOI: 10.1002/ima.70276
Santoshi Gorli, Ratnakar Dash

Accurate segmentation of medical images is critical for disease diagnosis and treatment planning. However, challenges such as fuzzy boundaries, low contrast, and complex anatomical structures often hinder performance. We propose EGCF-Net, a novel U-Net-based architecture that integrates a hybrid encoder with an edge-aware graph-based attention network to address these limitations. The hybrid encoder integrates the Swin transformer and Kronecker convolution to capture global and local contextual dependencies. Additionally, skip connections are enhanced using an edge-aware graph-based attention module, which combines graph spatial attention and graph channel attention to dynamically model spatial correlations and edge-aware contextual affinities. This design leads to edge-enhanced boundary delineation and improved regional consistency. We evaluate EGCF-Net on four benchmark datasets (Synapse, Kidney Stone, ISIC 2016, and ISIC 2018), achieving Dice scores of 84.70%, 93.07%, 91.07%, and 88.62%, respectively, surpassing existing state-of-the-art methods. Quantitative and qualitative results further validate the efficacy and robustness of the proposed approach, highlighting its potential for advancing medical image segmentation.

Citations: 0
StackDeVNet: An Explainable Stacking Ensemble of DenseNets and Vision Transformers for Advanced Gastrointestinal Disease Detection
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-26 | DOI: 10.1002/ima.70275
Osman Güler

Gastrointestinal disorders include diseases that negatively affect people's daily lives and carry a risk of cancer. Accurate and early diagnosis of these diseases is therefore important for patients' treatment. Deep learning architectures, which have achieved significant success in medical image analysis, are used effectively in early-diagnosis systems. In this study, a new approach that achieves higher accuracy in the detection of gastrointestinal diseases by combining DenseNet and Vision Transformer models in a stacking ensemble is proposed. In the experiments, the proposed model achieved 99.06% accuracy in a single test and a mean accuracy of 98.64% under 5-fold cross-validation. The approach shows promising accuracy and reliability in experiments on the KvasirV2 dataset and has the potential to be an effective method for detecting gastrointestinal diseases. To improve model interpretability, the Explainable AI technique Grad-CAM and attention-map visualizations were used, allowing visual justification of the model's predictions and highlighting clinically relevant regions in endoscopic images. The model obtained by combining DenseNet and Vision Transformer models with the stacking ensemble method is expected to serve as an example for future studies in health and image processing, especially for gastrointestinal diseases.
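Stacking trains a meta-learner on the out-of-fold predictions of the base models. A sketch with scikit-learn, where two simple classifiers stand in for the paper's DenseNet and Vision Transformer base learners (illustrative only, on synthetic data, not the KvasirV2 pipeline):

```python
# Stacking ensemble sketch: base learners' cross-validated
# predictions become the features of a logistic-regression
# meta-learner.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
    cv=5,  # out-of-fold base predictions feed the meta-learner
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv=5` argument mirrors the cross-validated stacking idea: the meta-learner never sees base-model predictions made on the data those models were fitted on.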

Citations: 0
An Attention-Guided Deep Learning Approach for Classifying 39 Skin Lesion Types
IF 2.5 | CAS Q4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2025-12-22 | DOI: 10.1002/ima.70269
Sauda Adiv Hanum, Ashim Dey, Muhammad Ashad Kabir

The skin, the largest organ of the human body, is vulnerable to numerous pathological conditions collectively referred to as skin lesions, encompassing a wide spectrum of dermatoses. Diagnosing these lesions remains challenging for medical practitioners due to their subtle visual differences, many of which are imperceptible to the naked eye. While not all lesions are malignant, some serve as early indicators of serious diseases such as skin cancer, emphasizing the urgent need for accurate and timely diagnostic tools. This study advances dermatological diagnostics by curating a comprehensive and balanced dataset containing 9360 dermoscopic and clinical images across 39 lesion categories, synthesized from five publicly available datasets. Five state-of-the-art deep learning architectures—MobileNetV2, Xception, InceptionV3, EfficientNetB1, and Vision Transformer (ViT)—were systematically evaluated on this dataset. To enhance model precision and robustness, Efficient Channel Attention (ECA) and Convolutional Block Attention Module (CBAM) mechanisms were integrated into these architectures. Extensive evaluation across multiple performance metrics demonstrated that the Vision Transformer with CBAM achieved the best results, with 93.46% accuracy, 94% precision, 93% recall, 93% F1-score, and 93.67% specificity. These findings highlight the effectiveness of attention-guided Vision Transformers in addressing complex, large-scale, multi-class skin lesion classification. By combining dataset diversity with advanced attention mechanisms, the proposed framework provides a reliable and interpretable tool to assist medical professionals in accurate and efficient lesion diagnosis, thereby contributing to improved clinical decision-making and patient outcomes.
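Both ECA and CBAM build on channel attention: pool each feature channel to a descriptor, gate it, and rescale the channels. A squeeze-and-excitation-style sketch in NumPy (CBAM additionally uses max pooling, a shared MLP, and a spatial-attention branch, none of which are reproduced here):

```python
import numpy as np

def channel_attention(feat):
    # feat: (C, H, W) feature map. Global-average-pool each channel,
    # pass the descriptor through a sigmoid gate, and rescale the
    # map channel-wise. Simplified: real SE/ECA/CBAM blocks insert a
    # learned bottleneck or 1D convolution before the sigmoid.
    squeeze = feat.mean(axis=(1, 2))        # per-channel descriptor
    gate = 1.0 / (1.0 + np.exp(-squeeze))   # sigmoid gating in (0, 1)
    return feat * gate[:, None, None]

# Two channels: one strongly positive, one strongly negative.
feat = np.stack([np.full((4, 4), 3.0), np.full((4, 4), -3.0)])
out = channel_attention(feat)
```

The positive channel keeps most of its magnitude while the negative one is suppressed toward zero, which is the "recalibration" effect these attention blocks provide.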

Citations: 0
Comprehensive Multitask Ensemble Segmentation and Clinical Interpretation of Pancreatic and Peripancreatic Anatomy With Radiomics and Deep Learning Features 基于放射组学和深度学习特征的胰腺和胰腺周围解剖的综合多任务集成分割和临床解释
IF 2.5 4区 计算机科学 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-12-20 DOI: 10.1002/ima.70270
Ming Jiang, Jinye Hu, Jie Zheng, Jin Wang, Xiaohui Ye

To develop and validate a multitask deep learning framework for the simultaneous segmentation and clinical classification of pancreatic and peripancreatic anatomical structures in contrast-enhanced CT imaging, enabling robust, automated diagnostic assessment and TNM staging. In this retrospective multicenter study, 3019 contrast-enhanced abdominal CT scans from patients with confirmed or suspected pancreatic disease were analyzed. Six anatomical structures were manually annotated: tumor, parenchyma, pancreatic duct, common bile duct, peripancreatic veins, and arteries. An ensemble model combining nnU-Net, TransUNet, and Swin-UNet was trained for segmentation. Post-segmentation, 215 radiomic and 2560 deep features were extracted and filtered via ICC, correlation, and harmonization procedures. Feature selection was performed using LASSO, MI, and ANOVA. Clinical classification was conducted using XGBoost, MLP, and TabTransformer models. Performance was evaluated through five-fold cross-validation and tested on independent internal and external datasets. The ensemble model achieved high segmentation accuracy (mean DSC: 0.89–0.94 across structures) and superior boundary precision (HD95: < 3 mm). For classification tasks, the best-performing models attained AUCs of 95.5% for tumor malignancy, 94.7% for parenchymal condition, 94.8% for ductal status, and 94.1% for vessel invasion. Feature reproducibility was confirmed with ICC ≥ 0.75 for 198 radiomic and 2112 deep features. External validation confirmed high accuracy and generalizability, with minimal performance degradation across clinical sites. Our multitask AI framework offers comprehensive and clinically actionable insights from CT imaging, combining precise anatomical segmentation with diagnostic classification. The system supports automated TNM staging and demonstrates strong potential for clinical integration. 
Future studies should explore multimodal imaging, longitudinal data, and genomics integration to further expand its diagnostic and prognostic capabilities.
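The abstract's feature screening (ICC and correlation filtering before LASSO/MI/ANOVA selection) can be sketched as follows. The ICC ≥ 0.75 cutoff follows the abstract; the 0.9 correlation cutoff, the greedy keep-the-earlier-feature rule, and the feature names are illustrative assumptions.

```python
import numpy as np

def filter_features(X, names, icc, icc_min=0.75, corr_max=0.9):
    """Reproducibility-then-redundancy screen: keep features with
    ICC >= icc_min, then greedily drop one of any pair whose absolute
    Pearson correlation exceeds corr_max (earlier feature wins)."""
    reproducible = [i for i in range(X.shape[1]) if icc[i] >= icc_min]
    selected = []
    for i in reproducible:
        redundant = any(
            abs(np.corrcoef(X[:, i], X[:, j])[0, 1]) > corr_max
            for j in selected
        )
        if not redundant:
            selected.append(i)
    return [names[i] for i in selected]

rng = np.random.default_rng(42)
base = rng.standard_normal(50)
X = np.column_stack([
    base,                                   # a reproducible feature
    base + 0.01 * rng.standard_normal(50),  # its near-duplicate (corr ~ 1)
    rng.standard_normal(50),                # a low-ICC feature
])
names = ["shape_volume", "shape_volume_copy", "texture_entropy"]
icc = np.array([0.90, 0.90, 0.50])          # hypothetical ICC values
kept = filter_features(X, names, icc)
print(kept)  # -> ['shape_volume']
```

Only the survivors of this screen would be passed on to LASSO, mutual information, or ANOVA selection as in the study's pipeline.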

Citations: 0
A Deep Learning-Based DenseEchoNet Framework With eXplainable Artificial Intelligence for Accurate and Early Heart Disease Prediction 基于深度学习的密集回声网络框架与可解释的人工智能,用于准确和早期的心脏病预测
IF 2.5 4区 计算机科学 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-12-19 DOI: 10.1002/ima.70268
Meghavathu S. S. Nayak, Hussain Syed

Heart disease (HD) is still a major cause of death worldwide, which emphasizes the importance of early and precise prediction. This paper presents DenseEchoNet, a deep learning model optimized with the Gazelle Optimizer Algorithm (GOA). The hybrid HD-ENN technique provides balanced learning to address class imbalance and high dimensionality, while squared exponential kernel-based PCA (SEKPCA) effectively reduces dimensionality. DenseEchoNet outperforms current baseline models, reaching accuracies of 0.9795 and 0.9785 on the HDHI and Cleveland datasets, respectively. XAI approaches such as LIME and SHAP improve model interpretability by offering distinct insights into feature contributions to HD risk. For early HD prediction, this system provides a straightforward, accurate, and efficient solution.
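A minimal NumPy sketch of the squared exponential kernel-based PCA (SEKPCA) step named in the abstract, assuming standard kernel PCA with a unit length scale (the paper's hyperparameters are not given here):

```python
import numpy as np

def sek_pca(X, n_components=2, length_scale=1.0):
    """Kernel PCA with a squared exponential (RBF) kernel: build the Gram
    matrix, center it in feature space, eigendecompose, and project the
    training points onto the leading components."""
    n = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise sq. distances
    K = np.exp(-sq / (2.0 * length_scale ** 2))               # squared exponential kernel
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one                # double centering
    vals, vecs = np.linalg.eigh(Kc)                           # ascending eigenvalues
    top = np.argsort(vals)[::-1][:n_components]               # pick the largest
    alphas = vecs[:, top] / np.sqrt(np.maximum(vals[top], 1e-12))
    return Kc @ alphas                                        # embedded coordinates

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))      # 20 samples, 5 features
Z = sek_pca(X, n_components=2)
print(Z.shape)  # (20, 2)
```

The reduced coordinates Z would then feed the downstream classifier, mirroring the dimensionality-reduction role SEKPCA plays in the paper's pipeline.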

Citations: 0
Application of MLEM-TV Algorithm in Diffuse Correlation Tomography Blood Flow Imaging MLEM-TV算法在弥散相关断层血流成像中的应用
IF 2.5 4区 计算机科学 Q2 ENGINEERING, ELECTRICAL & ELECTRONIC Pub Date : 2025-12-10 DOI: 10.1002/ima.70267
Zicheng Li, Dalin Cheng, Juanjuan Shen, Xiaojuan Zhang

Diffuse correlation tomography (DCT) reconstructs the motion velocity of scatterers (blood flow index, BFI) within biological tissues by using information from the escaped photons. Given the randomness of a single scatterer's motion and the determinism of the particle swarm, the motion of scatterers was analyzed for the first time using a probabilistic-statistical method. This study applied the maximum likelihood expectation maximization (MLEM) algorithm for DCT, integrating a total variation (TV) regularization model as a constraint to enhance BFI reconstruction. In simulation, the mean absolute error (MAE) of the cross-shaped anomaly, reconstructed from the noise-free and noisy autocorrelation function g1(τ), was 0.0962 and 0.1831, with corresponding contrasts of 8.25 and 6.42, respectively. For the two-dot anomaly, the MAE was 0.0293 and 0.0452, with corresponding contrasts of 4.07 and 3.08, respectively. In phantom experiments, the contrast of the cross-shaped anomaly was 0.59. For the controllable-velocity tubular anomaly, the contrast increased gradually (1.42, 1.95, and 2.49) as the pump speed was raised. Clinical tests of calf skeletal muscle revealed approximately tenfold higher BFI in the relaxed state than in the cuff occlusion state. The result demonstrates that the MLEM-TV algorithm can be an alternative algorithm for BFI reconstruction, with potential applications for detecting abnormal blood flow perfusion in cerebral, breast, and skeletal muscle pathology.
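The MLEM-TV idea from the abstract can be sketched as follows. Interleaving one total-variation subgradient step after each multiplicative MLEM update, on a toy 1D problem, is our simplification of the TV constraint, not necessarily the paper's exact formulation.

```python
import numpy as np

def mlem_tv(A, y, n_iter=50, tv_weight=0.05, eps=1e-12):
    """Multiplicative MLEM update followed, each iteration, by a small
    total-variation subgradient step with a nonnegativity clamp."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])                         # sensitivity A^T 1
    for _ in range(n_iter):
        x = x / (sens + eps) * (A.T @ (y / (A @ x + eps)))   # MLEM step
        g = np.sign(np.diff(x))                              # sign of forward differences
        tv_grad = np.concatenate(([-g[0]], g[:-1] - g[1:], [g[-1]]))
        x = np.maximum(x - tv_weight * tv_grad, 0.0)         # TV step, keep x >= 0
    return x

rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(40, 16))   # toy forward model
x_true = np.zeros(16)
x_true[5:10] = 2.0                         # piecewise-constant "flow" target
y = A @ x_true                             # noiseless measurements
x_hat = mlem_tv(A, y)
print(np.round(x_hat, 2))
```

The TV term rewards piecewise-constant solutions, which matches the blocky flow anomalies (cross, dots, tube) used in the paper's simulations and phantoms.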
Citations: 0
Journal: International Journal of Imaging Systems and Technology