
Latest Publications from International Journal of Imaging Systems and Technology

Deep Learning and Handcrafted Features for Thyroid Nodule Classification
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-08 | DOI: 10.1002/ima.23215
Ayoub Abderrazak Maarouf, Hacini Meriem, Fella Hachouf

In this research, we present a refined image-based computer-aided diagnosis (CAD) system for thyroid cancer detection using ultrasound imagery. This system integrates a specialized convolutional neural network (CNN) architecture designed to address the unique aspects of thyroid image datasets. Additionally, it incorporates a novel statistical model that utilizes a two-dimensional random coefficient autoregressive (2D-RCA) method to precisely analyze the textural characteristics of thyroid images, thereby capturing essential texture-related information. The classification framework relies on a composite feature vector that combines deep learning features from the CNN and handcrafted features from the 2D-RCA model, processed through a support vector machine (SVM) algorithm. Our evaluation methodology is structured in three phases: initial assessment of the 2D-RCA features, analysis of the CNN-derived features, and a final evaluation of their combined effect on classification performance. Comparative analyses with well-known networks such as VGG16, VGG19, ResNet50, and AlexNet highlight the superior performance of our approach. The outcomes indicate a significant enhancement in diagnostic accuracy, achieving a classification accuracy of 97.2%, a sensitivity of 84.42%, and a specificity of 95.23%. These results not only demonstrate a notable advancement in the classification of thyroid nodules but also establish a new standard in the efficiency of CAD systems.
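The composite feature vector can be sketched in a few lines. The `rca_texture_features` helper below is a simplified, hypothetical stand-in for the paper's 2D-RCA estimator (a plain least-squares 2D autoregressive fit rather than the full random-coefficient model), and the CNN features are assumed to be precomputed elsewhere:

```python
import numpy as np

def rca_texture_features(img):
    # Least-squares fit of a simple 2D autoregressive model:
    # x[i,j] ~ a1*x[i-1,j] + a2*x[i,j-1] + a3*x[i-1,j-1]
    # (an illustrative simplification of the paper's 2D-RCA texture model)
    x = img.astype(float)
    y = x[1:, 1:].ravel()
    A = np.stack([x[:-1, 1:].ravel(),    # vertical neighbor
                  x[1:, :-1].ravel(),    # horizontal neighbor
                  x[:-1, :-1].ravel()],  # diagonal neighbor
                 axis=1)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    residual = y - A @ coeffs
    # 3 AR coefficients + residual spread as handcrafted texture features
    return np.concatenate([coeffs, [residual.std()]])

def composite_vector(cnn_feats, img):
    # Concatenate deep and handcrafted features, then normalize
    # before handing the vector to an SVM
    hand = rca_texture_features(img)
    v = np.concatenate([cnn_feats, hand])
    return (v - v.mean()) / (v.std() + 1e-8)
```

The normalized composite vector would then be fed to the SVM classifier described in the abstract.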

Citations: 0
SDR2Tr-GAN: A Novel Medical Image Fusion Pipeline Based on GAN With SDR2 Module and Transformer Optimization Strategy
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-08 | DOI: 10.1002/ima.23208
Ying Cheng, Xianjin Fang, Zhiri Tang, Zekuan Yu, Linlin Sun, Li Zhu

In clinical practice, radiologists diagnose brain tumors, judging their type and grade, with the help of different magnetic resonance imaging (MRI) sequences. It is hard to realize a brain tumor computer-aided diagnosis system with only a single MRI sequence. However, existing multi-sequence MRI fusion methods have limitations in enhancing tumor details. To improve the fusion details of multi-modality MRI images, this paper proposes a novel conditional generative adversarial fusion network based on three discriminators and a Staggered Dense Residual2 (SDR2) module, named SDR2Tr-GAN. In the SDR2Tr-GAN pipeline, the generator consists of an encoder, a decoder, and a fusion strategy that enhances the feature representation. The SDR2 module, built on Res2Net, is incorporated into the encoder to extract multi-scale features. In addition, a Multi-Head Spatial/Channel Attention Transformer is integrated into the pipeline as a fusion strategy to strengthen the long-range dependencies of global context information. A mask-based constraint was designed as a novel fusion optimization mechanism focused on enhancing salient feature details; it uses the segmentation mask produced by a pre-trained U-Net, together with the ground truth, to optimize the training process. Meanwhile, mutual information (MI) and SSIM losses jointly improve the visual perception of the images. Extensive experiments were conducted on the public BraTS2021 dataset. The visual and quantitative results demonstrate that the proposed method can simultaneously enhance both global image quality and local texture details in multi-modality MRI images. Moreover, SDR2Tr-GAN outperforms other state-of-the-art fusion methods in both subjective and objective evaluation.
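The idea behind the mask-based constraint can be sketched as a mask-weighted reconstruction loss: pixels inside the precomputed segmentation mask get a larger weight, so errors on salient tumor details cost more. This is a hypothetical NumPy simplification (a weighted L1 term), not the paper's full three-discriminator objective:

```python
import numpy as np

def mask_weighted_l1(fused, target, mask, alpha=2.0):
    # Up-weight pixels inside the (pre-computed) tumor segmentation mask,
    # mimicking the spirit of the paper's mask-based fusion constraint.
    # alpha > 1 controls how much masked pixels dominate the loss.
    w = 1.0 + (alpha - 1.0) * mask
    return np.mean(w * np.abs(fused - target))
```

In training, this term would be added to the adversarial, MI, and SSIM losses described above.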

Citations: 0
Hybrid Wavelet-Deep Learning Framework for Fluorescence Microscopy Images Enhancement
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-07 | DOI: 10.1002/ima.23212
Francesco Branciforti, Maura Maggiore, Kristen M. Meiburger, Tania Pannellini, Massimo Salvi

Fluorescence microscopy is a powerful tool for visualizing cellular structures, but it faces challenges such as noise, low contrast, and autofluorescence that can hinder accurate image analysis. To address these limitations, we propose a novel hybrid image enhancement method that combines wavelet-based denoising, linear contrast enhancement, and convolutional neural network-based autofluorescence correction. Our automated method employs Haar wavelet transform for noise reduction and a series of adaptive linear transformations for pixel value adjustment, effectively enhancing image quality while preserving crucial details. Furthermore, we introduce a semantic segmentation approach using CNNs to identify and correct autofluorescence in cellular aggregates, enabling targeted mitigation of unwanted background signals. We validate our method using quantitative metrics, such as signal-to-noise ratio (SNR) and peak signal-to-noise ratio (PSNR), demonstrating superior performance compared to both mathematical and deep learning-based techniques. Our method achieves an average SNR improvement of 8.5 dB and a PSNR increase of 4.2 dB compared with the original images, outperforming state-of-the-art methods such as BM3D and CLAHE. Extensive testing on diverse datasets, including publicly available human-derived cardiosphere and fluorescence microscopy images of bovine endothelial cells stained for mitochondria and actin filaments, showcases the flexibility and robustness of our approach across various acquisition conditions and artifacts. The proposed method significantly improves fluorescence microscopy image quality, facilitating more accurate and reliable analysis of cellular structures and processes, with potential applications in biomedical research and clinical diagnostics.
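The wavelet-denoise-then-stretch core of the pipeline can be sketched with a single-level Haar transform in NumPy. This is an illustrative reduction (one decomposition level, soft thresholding with a fixed threshold, even-sized input), not the authors' full adaptive pipeline or CNN-based autofluorescence correction:

```python
import numpy as np

def haar2d(x):
    # One level of the 2D Haar transform (even-sized input assumed)
    a = (x[0::2] + x[1::2]) / 2
    d = (x[0::2] - x[1::2]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2   # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2   # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2   # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2   # diagonal detail
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    # Exact inverse of haar2d
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def denoise_stretch(img, thresh=0.05):
    # Soft-threshold the detail subbands, then linearly stretch to [0, 1]
    ll, lh, hl, hh = haar2d(img)
    soft = lambda c: np.sign(c) * np.maximum(np.abs(c) - thresh, 0)
    out = ihaar2d(ll, soft(lh), soft(hl), soft(hh))
    lo, hi = out.min(), out.max()
    return (out - lo) / (hi - lo + 1e-8)
```

A production pipeline would typically use a wavelet library (e.g., PyWavelets) with multiple decomposition levels and a data-driven threshold.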

Citations: 0
Integrating VGG 19 U-Net for Breast Thermogram Segmentation and Hybrid Enhancement With Optimized Classifier Selection: A Novel Approach to Breast Cancer Diagnosis
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-03 | DOI: 10.1002/ima.23210
A. Arul Edwin Raj, Nabihah Binti Ahmad, S. Ananiah Durai, R. Renugadevi

Early diagnosis of breast cancer is essential for improving patient survival rates and reducing treatment costs. Although breast thermogram images have high quality, doctors in developing countries often struggle with early diagnosis due to difficulties in interpreting subtle details. Implementing a Computer-Aided Diagnosis (CAD) system can assist doctors in accurately analyzing these details. This article presents an innovative approach to breast cancer diagnosis using thermal images. The proposed method enhances the quality and clarity of relevant features while preserving sharp and curved edges through U-Net-based segmentation for automatic selection of the ROI, advanced hybrid image enhancement techniques, and a machine learning classifier. Subjective analysis compares the processed images with five conventional enhancement techniques, demonstrating the efficiency of the proposed method. The quantitative analysis further validates the effectiveness of the proposed method against five conventional methods using four quality measures. The proposed method achieves superior performance with PSNR of 15.27 for normal and 14.31 for malignant images, AMBE of 6.594 for normal and 7.46 for malignant images, SSIM of 0.829 for normal and 0.80 for malignant images, and DSSIM of 0.084 for normal and 0.14 for malignant images. The classification phase evaluates four classifiers using 13 features from three categories. The Random Forest (RF) classifier with Discrete Wavelet Transform (DWT)-based features initially outperformed the other classifiers but had limited performance, with accuracy, sensitivity, and specificity of 81.8%, 88.8%, and 91%, respectively. To improve this, the three categories of features were normalized and converted into two principal components using Principal Component Analysis (PCA) to train the RF classifier, which then showed superior performance with 97.7% accuracy, 96.5% sensitivity, and 98.2% specificity. The dataset used in this article was obtained from the Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, India. The entire proposed model is implemented in a Jupyter notebook.
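The PCA step that maps the 13 normalized features onto their two leading principal components can be sketched via an SVD. This is a generic PCA, assumed (not confirmed by the abstract) to match the paper's preprocessing before the RF classifier:

```python
import numpy as np

def pca_2components(X):
    # Center the feature matrix, take the SVD, and project onto the
    # two leading principal directions (rows of Vt).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T
```

The resulting two-column matrix would be the input used to train the Random Forest classifier.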

Citations: 0
DAG-Net: Dual-Branch Attention-Guided Network for Multi-Scale Information Fusion in Lung Nodule Segmentation
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-02 | DOI: 10.1002/ima.23209
Bojie Zhang, Hongqing Zhu, Ziying Wang, Lan Luo, Yang Yu

The development of deep learning has played an increasingly crucial role in assisting medical diagnoses. Lung cancer, as a major disease threatening human health, benefits significantly from the use of auxiliary medical systems to assist in segmenting pulmonary nodules. This approach effectively enhances both the accuracy and speed of diagnosis for physicians, thereby reducing the risk of patient mortality. However, pulmonary nodules are characterized by irregular shapes and a wide range of diameter variations. They often reside amidst blood vessels and various tissue structures, posing significant challenges in designing an automated system for lung nodule segmentation. To address this, we have developed a three-dimensional dual-branch attention-guided network (DAG-Net) for multi-scale information fusion, aimed at segmenting lung nodules of various types and sizes. First, a dual-branch encoding structure is employed to provide the network with prior knowledge about nodule texture information, which aids the network in better identifying different types of lung nodules. Next, we designed a structure to extract global information, which enhances the network's ability to localize lung nodules of different sizes by fusing information from multiple resolutions. Following that, we fused multi-scale information in a parallel structure and used attention mechanisms to guide the network in suppressing the influence of non-nodule regions. Finally, we employed an attention-based structure to guide the network in achieving more accurate segmentation by progressively using high-level semantic information at each layer. Our proposed network achieved a DSC value of 85.6% on the LUNA16 dataset, outperforming state-of-the-art methods, demonstrating the effectiveness of the network.
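The attention-guided suppression of non-nodule regions can be illustrated with a toy 2D gate: the coarse branch is upsampled, squashed through a sigmoid into an attention map, and multiplied into the fine branch. This is a schematic NumPy sketch of the general mechanism, not the DAG-Net layer itself (which is 3D and learned):

```python
import numpy as np

def spatial_attention_fuse(feat_hi, feat_lo):
    # Nearest-neighbor upsample of the low-resolution branch to match
    # the high-resolution branch (assumes a 2x scale difference).
    up = feat_lo.repeat(2, axis=0).repeat(2, axis=1)
    # Sigmoid turns the coarse response into a (0, 1) attention map
    gate = 1.0 / (1.0 + np.exp(-up))
    # Gating attenuates high-res activations where the coarse branch
    # sees no nodule evidence.
    return feat_hi * gate
```

In the real network, such gates sit at several scales so that both small and large nodules survive the fusion.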

Citations: 0
Embedded System-Based Malaria Detection From Blood Smear Images Using Lightweight Deep Learning Model
IF 3.0 | CAS Tier 4 (Computer Science) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-29 | DOI: 10.1002/ima.23205
Abdus Salam, S. M. Nahid Hasan, Md. Jawadul Karim, Shamim Anower, Md Nahiduzzaman, Muhammad E. H. Chowdhury, M. Murugappan

Malaria, transmitted by female Anopheles mosquitoes, is highly contagious, resulting in numerous deaths across various regions. Microscopic examination of blood cells remains one of the most accurate methods for malaria diagnosis, but it is time-consuming and can occasionally produce inaccurate results. Owing to advances in machine learning and deep learning for medical diagnosis, improved diagnostic accuracy can now be achieved at lower cost than conventional microscopy. This work utilizes an open-source dataset of 26,161 RGB blood smear images for malaria detection. Our preprocessing resized the images to 64 × 64 because of the computational constraints of developing embedded-systems-based malaria detection. We present a novel embedded-system approach using 119,154 trainable parameters in a lightweight 17-layer SqueezeNet model for the automatic detection of malaria. Remarkably, the model is only 1.72 MB in size. An evaluation of the model's performance on the original NIH malaria dataset shows exceptional accuracy, precision, recall, and F1 scores of 96.37%, 95.67%, 97.21%, and 96.44%, respectively. On a modified dataset, the results improved further to 99.71% across all metrics. Our model significantly outperforms current deep learning models for malaria detection, making it ideal for embedded systems. It has also been rigorously tested on the Jetson Nano B01 edge device, demonstrating a rapid single-image prediction time of only 0.24 s. The fusion of deep learning with embedded systems makes this research a crucial step toward improving malaria diagnosis. In resource-constrained settings, the model's lightweight architecture and accuracy enhancements hold great promise for addressing the critical challenge of malaria detection.
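To see why a 17-layer SqueezeNet stays so small, one can count the trainable parameters of its "fire" blocks, which replace large 3×3 convolutions with a 1×1 squeeze followed by parallel 1×1 and 3×3 expands. The helper below is a generic parameter counter for convolutions (weights plus biases); the block sizes used in the test are illustrative, not the authors' exact configuration:

```python
def conv_params(in_ch, out_ch, k):
    # Parameters of a k x k convolution: weights + one bias per output channel
    return in_ch * out_ch * k * k + out_ch

def fire_module_params(in_ch, squeeze, expand):
    # SqueezeNet "fire" block: 1x1 squeeze, then parallel 1x1 and 3x3 expands
    return (conv_params(in_ch, squeeze, 1)
            + conv_params(squeeze, expand, 1)
            + conv_params(squeeze, expand, 3))
```

For example, a fire block with 96 input channels, a 16-channel squeeze, and 64-channel expands needs only 11,920 parameters, versus over half a million for a plain 96-to-128 3×3 convolution.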

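The accuracy, precision, recall, and F1 figures reported for the malaria classifier all follow directly from confusion-matrix counts. A minimal sketch of that computation (the counts below are hypothetical, chosen for illustration, not the paper's data):

```python
def classification_metrics(tp, fp, fn, tn):
    """Compute accuracy, precision, recall, and F1 from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for illustration only:
acc, prec, rec, f1 = classification_metrics(tp=950, fp=43, fn=27, tn=980)
print(f"acc={acc:.4f} prec={prec:.4f} rec={rec:.4f} f1={f1:.4f}")
```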
Citations: 0
Advancing Leukocyte Classification: A Cutting-Edge Deep Learning Approach for AI-Driven Clinical Diagnosis
IF 3 · CAS Q4 (Computer Science) · JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-10-28 · DOI: 10.1002/ima.23204
Ahmadsaidulu Shaik, Abhishek Tiwari, Balachakravarthy Neelapu, Puneet Kumar Jain, Earu Banoth

White blood cells (WBCs) are crucial components of the immune system, responsible for detecting and eliminating pathogens. Accurate detection and classification of WBCs are essential for various clinical diagnostics. This study aims to develop an AI framework for detecting and classifying WBCs from microscopic images using a customized YOLOv5 model with three key modifications. Firstly, the C3 module in YOLOv5's backbone is replaced with the innovative C3TR structure to enhance feature extraction and reduce background noise. Secondly, the BiFPN is integrated into the neck to improve feature localization and discrimination. Thirdly, an additional layer in the head enhances detection of small WBCs. Experiments on the BCCD dataset, comprising 352 microscopic blood smear images with leukocytes, demonstrated the framework's superiority over state-of-the-art methods, achieving 99.4% accuracy. Furthermore, the model exhibits computational efficiency, operating over five times faster than existing YOLO models. These findings underscore the framework's promise in medical diagnostics, showcasing deep learning's supremacy in automated cell classification.

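Detectors like the customized YOLOv5 above are scored and post-processed (e.g., in non-maximum suppression) via intersection-over-union between predicted and ground-truth boxes. A minimal IoU sketch, independent of any particular YOLO implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # zero if boxes do not overlap
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partially overlapping boxes
```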
Citations: 0
Fast-MedNeXt: Accelerating the MedNeXt Architecture to Improve Brain Tumour Segmentation Efficiency
IF 3 · CAS Q4 (Computer Science) · JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-10-27 · DOI: 10.1002/ima.23196
Bin Liu, Bing Li, Yaojing Chen, Victor Sreeram, Shuofeng Li

With the rapid development of medical imaging technology, 3D image segmentation technology has gradually become a mainstream method, especially in brain tumour detection and diagnosis showing its unique advantages. The technique makes full use of 3D spatial information to locate and analyze tumours more accurately, thus playing an important role in improving diagnostic accuracy, optimising treatment planning and promoting research. However, it also suffers from significant computational expenditure and delayed processing pace. In this paper, we propose an innovative optimisation scheme to address this problem. We thoroughly investigate the MedNeXt network and propose Fast-MedNeXt, which aims to increase the processing speed while maintaining accuracy. First, we introduce the partial convolution (PConv) technique, which replaces the deep convolutional layers in the network. This improvement effectively reduces computation and memory requirements while maintaining efficient feature extraction. Second, based on PConv, we propose PConv-Down and PConv-Up modules, which are applied to the up-sampling and down-sampling modules to further optimise the network structure and improve efficiency. To confirm the efficacy of the approach, we carried out a sequence of tests in the multimodal brain tumour segmentation challenge 2021 (BraTS2021). By comparing with the MedNeXt series network, the Fast-MedNeXt reduced the latency by 22.1%, 20.5%, 15.8%, and 11.4% respectively, while the average accuracy also increased by 0.475% and 0.2% respectively. These significant performance improvements demonstrate the effectiveness of Fast-MedNeXt in 3D medical image segmentation tasks and provide a new and more efficient solution for the field.

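The partial convolution (PConv) idea behind Fast-MedNeXt — convolving only a subset of channels and passing the rest through unchanged — can be sketched in NumPy. This is an illustrative stand-in, not the authors' implementation; the averaging kernel and the 1/4 channel split are arbitrary choices for the demo:

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same' 2D filtering of one channel (cross-correlation, as in CNN 'convolution')."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def partial_conv(x, kernel, ratio=0.25):
    """Convolve only the first `ratio` fraction of channels; pass the rest through unchanged."""
    c = x.shape[0]
    n = max(1, int(c * ratio))          # number of channels actually convolved
    out = x.astype(float).copy()
    for ch in range(n):
        out[ch] = conv2d_same(x[ch], kernel)
    return out

x = np.arange(4 * 5 * 5, dtype=float).reshape(4, 5, 5)
k = np.ones((3, 3)) / 9.0               # simple averaging kernel for illustration
y = partial_conv(x, k, ratio=0.25)
print(y.shape)                          # channels 1-3 are copied through untouched
```

The compute saving is the point: only `c * ratio` channels pay for the spatial filtering, while the identity channels cost nothing.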
Citations: 0
A Novel Dictionary Learning Algorithm Based on Prior Knowledge for fMRI Data Analysis
IF 3 · CAS Q4 (Computer Science) · JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-10-27 · DOI: 10.1002/ima.23195
Fangmin Sheng, Yuhu Shi, Lei Wang, Ying Li, Hua Zhang, Weiming Zeng

Task-based functional magnetic resonance imaging (fMRI) has been widely utilized for brain activation detection and functional network analysis. In recent years, the K-singular value decomposition (K-SVD) algorithm has gained increasing attention in the research of fMRI data analysis methods. In this study, we propose a novel temporal feature region-growing constrained K-SVD algorithm that incorporates task-based fMRI temporal prior knowledge and utilizes a region-growing algorithm to infer potential activation locations. The algorithm incorporates temporal and spatial constraints to enhance the detection of brain activation. Specifically, this paper improves the three stages of the traditional K-SVD algorithm. First, in the dictionary initialization stage, the automatic target generation process with an independent component analysis algorithm is utilized in conjunction with prior knowledge to enhance the accuracy of initialization. Second, in the sparse coding stage, the region-growing algorithm is employed to infer potential activation locations based on temporal prior knowledge, thereby imposing spatial constraints to limit the extent of activation regions. Finally, in the dictionary learning stage, soft constraints and low correlation constraints are applied to reinforce the consistency with prior knowledge and enhance the robustness of learning for task-related atoms. The proposed method was validated on simulated and real fMRI data, showing superior performance in detecting brain activation compared with traditional methods. The results indicate that the algorithm accurately identifies activated brain regions, providing an effective approach for studying brain function in clinical applications.

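The sparse coding stage of K-SVD is commonly implemented with orthogonal matching pursuit (OMP). A minimal OMP sketch under the assumption of unit-norm dictionary atoms — an illustrative baseline, not the authors' region-growing constrained variant:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms of D (columns unit-norm)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # re-fit coefficients on the current support by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 11]      # signal built from two atoms
c = omp(D, y, k=2)
print("support:", np.nonzero(c)[0])
```

In full K-SVD, this coding step alternates with an atom-by-atom dictionary update based on a rank-1 SVD of the residual.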
Citations: 0
Automated, Reproducible, and Reconfigurable Human Head Phantom for Experimental Testing of Microwave Systems for Stroke Classification
IF 3 · CAS Q4 (Computer Science) · JCR Q2 ENGINEERING, ELECTRICAL & ELECTRONIC · Pub Date: 2024-10-26 · DOI: 10.1002/ima.23200
Tomas Pokorny, Tomas Drizdal, Marek Novak, Jan Vrba

Microwave systems for prehospital stroke classification are currently being developed. In the future, these systems should enable rapid recognition of the type of stroke, shorten the time to start treatment, and thus significantly improve the prognosis of patients. In this study, we realized a realistic and reconfigurable 3D human head phantom for the development, testing, and validation of these newly developed diagnostic methods. The phantom enables automated and reproducible measurements for different positions of the stroke model. The stroke model itself is also interchangeable, so measurements can be made for different types, sizes, and shapes of strokes. Furthermore, an extensive series of measurements was performed at a frequency of 1 GHz, and an SVM classification algorithm was deployed, which successfully identified ischemic stroke in 80% of the corresponding measured data. If similar classification accuracy could be achieved in patients, it would lead to a dramatic reduction in the consequences of strokes.

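A linear SVM of the kind deployed for the stroke measurements above can be sketched with plain hinge-loss subgradient descent. Everything below — the toy 2-D data and the hyperparameters — is illustrative only, not the paper's 1 GHz measurement features:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Hinge-loss linear SVM via plain subgradient descent; labels y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                  # margin violators drive the update
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy, linearly separable 2-D data (illustrative only)
X = np.array([[2., 2.], [3., 2.], [2., 3.], [-2., -2.], [-3., -2.], [-2., -3.]])
y = np.array([1., 1., 1., -1., -1., -1.])
w, b = train_linear_svm(X, y)
print((np.sign(X @ w + b) == y).mean())    # training accuracy on the toy data
```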
Citations: 0