
Latest publications in IEEE Transactions on Medical Imaging

Uni4Eye++: A General Masked Image Modeling Multi-modal Pre-training Framework for Ophthalmic Image Classification and Segmentation.
Pub Date : 2024-07-02 DOI: 10.1109/TMI.2024.3422102
Zhiyuan Cai, Li Lin, Huaqing He, Pujin Cheng, Xiaoying Tang

A large-scale labeled dataset is a key factor for the success of supervised deep learning in most ophthalmic image analysis scenarios. However, limited annotated data is very common in ophthalmic image analysis, since manual annotation is time-consuming and labor-intensive. Self-supervised learning (SSL) methods bring huge opportunities for better utilizing unlabeled data, as they do not require massive annotations. To utilize as many unlabeled ophthalmic images as possible, it is necessary to break the dimension barrier, simultaneously making use of both 2D and 3D images while alleviating the issue of catastrophic forgetting. In this paper, we propose a universal self-supervised Transformer framework named Uni4Eye++ to discover the intrinsic image characteristics and capture domain-specific feature embeddings in ophthalmic images. Uni4Eye++ can serve as a global feature extractor, built on a Masked Image Modeling task with a Vision Transformer architecture. On the basis of our previous work Uni4Eye, we further employ an image-entropy-guided masking strategy to reconstruct more informative patches and a dynamic head generator module to alleviate modality confusion. We evaluate the performance of our pre-trained Uni4Eye++ encoder by fine-tuning it on multiple downstream ophthalmic image classification and segmentation tasks. The superiority of Uni4Eye++ is established through comparisons with other state-of-the-art SSL pre-training methods. Our code is available on GitHub.
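The entropy-guided masking is the main addition over the original Uni4Eye. Below is a minimal sketch of how per-patch intensity entropy could drive mask selection; the patch size, histogram binning, and top-k selection rule are our illustrative assumptions, not the authors' exact design.

import torch

def entropy_guided_mask(images, patch_size=16, mask_ratio=0.75, bins=32):
    # images: (B, C, H, W) in [0, 1]; returns a (B, N) boolean mask where
    # True marks a patch selected for masking/reconstruction.
    B, C, H, W = images.shape
    num_patches = (H // patch_size) * (W // patch_size)
    # Cut non-overlapping patches: (B, C, H/ps, W/ps, ps, ps)
    p = images.unfold(2, patch_size, patch_size).unfold(3, patch_size, patch_size)
    p = p.contiguous().view(B, C, num_patches, patch_size * patch_size)
    p = p.permute(0, 2, 1, 3).reshape(B, num_patches, -1)
    # Shannon entropy of each patch's intensity histogram.
    ent = torch.empty(B, num_patches)
    for b in range(B):
        for n in range(num_patches):
            hist = torch.histc(p[b, n], bins=bins, min=0.0, max=1.0)
            prob = hist / hist.sum().clamp(min=1.0)
            ent[b, n] = -(prob * (prob + 1e-12).log()).sum()
    # Preferentially mask the highest-entropy (most informative) patches.
    n_mask = int(mask_ratio * num_patches)
    idx = ent.argsort(dim=1, descending=True)[:, :n_mask]
    return torch.zeros(B, num_patches, dtype=torch.bool).scatter(1, idx, True)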

Citations: 0
IEEE Nuclear Science Symposium
Pub Date : 2024-07-01 DOI: 10.1109/TMI.2024.3372492
{"title":"IEEE Nuclear Science Symposium","authors":"","doi":"10.1109/TMI.2024.3372492","DOIUrl":"10.1109/TMI.2024.3372492","url":null,"abstract":"","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10579890","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141489286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Organ-aware Diagnosis Framework for Radiology Report Generation.
Pub Date : 2024-07-01 DOI: 10.1109/TMI.2024.3421599
Shiyu Li, Pengchong Qiao, Lin Wang, Munan Ning, Li Yuan, Yefeng Zheng, Jie Chen

Radiology report generation (RRG) is crucial for saving radiologists' valuable time in drafting reports, thereby increasing their work efficiency. Compared to typical methods that directly transfer image-captioning technologies to RRG, our approach incorporates organ-wise priors into report generation. Specifically, in this paper, we propose Organ-aware Diagnosis (OaD) to generate diagnostic reports containing descriptions of each physiological organ. During training, we first develop a task distillation (TD) module to extract organ-level descriptions from reports. We then introduce an organ-aware report generation module that, on the one hand, provides a specific description for each organ and, on the other, simulates clinical situations by providing short descriptions for normal cases. Furthermore, we design an auto-balance mask loss to ensure balanced training for normal/abnormal descriptions and various organs simultaneously. Being intuitively reasonable and practically simple, our OaD outperforms SOTA alternatives by large margins on the commonly used IU-Xray and MIMIC-CXR datasets, as evidenced by a 3.4% BLEU-1 improvement on MIMIC-CXR and a 2.0% BLEU-2 improvement on IU-Xray.
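As a concrete illustration of the auto-balance idea, the hedged sketch below reweights a token-level cross-entropy so that each group (for example, each organ crossed with normal/abnormal status) contributes equally; the grouping scheme and function signature are hypothetical, not the paper's exact loss.

import torch
import torch.nn.functional as F

def auto_balance_loss(logits, targets, group_ids, ignore_index=-100):
    # logits: (B, T, V) decoder outputs; targets: (B, T) token ids;
    # group_ids: (B, T) assigns each target token to a group such as
    # (organ, normal/abnormal); padding positions carry ignore_index.
    per_tok = F.cross_entropy(logits.transpose(1, 2), targets,
                              ignore_index=ignore_index, reduction="none")
    valid = targets != ignore_index
    groups = group_ids[valid].unique()
    loss = logits.new_zeros(())
    for g in groups:
        sel = valid & (group_ids == g)
        loss = loss + per_tok[sel].mean()  # each group weighs equally
    return loss / max(len(groups), 1)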

Citations: 0
Multi-Label Chest X-Ray Image Classification with Single Positive Labels.
Pub Date : 2024-07-01 DOI: 10.1109/TMI.2024.3421644
Jiayin Xiao, Si Li, Tongxu Lin, Jian Zhu, Xiaochen Yuan, David Dagan Feng, Bin Sheng

Deep learning approaches for multi-label chest X-ray (CXR) image classification usually require large-scale datasets. However, acquiring such datasets with full annotations is costly, time-consuming, and prone to noisy labels. Therefore, we introduce a weakly supervised learning problem called Single Positive Multi-label Learning (SPML) into CXR image classification (abbreviated as SPML-CXR), in which only one positive label is annotated per image. A simple solution to the SPML-CXR problem is to assume that all unannotated pathological labels are negative; however, this may introduce false negative labels and decrease model performance. To this end, we present a Multi-level Pseudo-label Consistency (MPC) framework for SPML-CXR. First, inspired by pseudo-labeling and consistency regularization in semi-supervised learning, we construct a weak-to-strong consistency framework, where the model prediction on a weakly augmented image is treated as the pseudo label for supervising the model prediction on a strongly augmented version of the same image, and we define an Image-level Perturbation-based Consistency (IPC) regularization to recover potentially mislabeled positive labels. Besides, we incorporate Random Elastic Deformation (RED) as an additional strong augmentation to enhance the perturbation. Second, aiming to expand the perturbation space, we design a feature-level perturbation stream for the consistency framework and introduce a Feature-level Perturbation-based Consistency (FPC) regularization as a supplement. Third, we design a Transformer-based encoder module to explore the sample relationships within each mini-batch via a Batch-level Transformer-based Correlation (BTC) regularization. Extensive experiments on the CheXpert and MIMIC-CXR datasets have shown the effectiveness of our MPC framework in solving the SPML-CXR problem.
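A minimal sketch of the weak-to-strong IPC idea for multi-label CXR classification follows, assuming a sigmoid multi-label classifier; the confidence-thresholding rule is our assumption, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def ipc_consistency_loss(model, x_weak, x_strong, threshold=0.7):
    # model returns multi-label logits of shape (B, L).
    with torch.no_grad():
        p_weak = torch.sigmoid(model(x_weak))
        pseudo = (p_weak > 0.5).float()                # hard pseudo-labels
        confident = (p_weak - 0.5).abs() > (threshold - 0.5)
    logits_strong = model(x_strong)
    loss = F.binary_cross_entropy_with_logits(logits_strong, pseudo,
                                              reduction="none")
    # Only confident pseudo-labels supervise the strongly augmented view.
    return (loss * confident.float()).sum() / confident.sum().clamp(min=1)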

Citations: 0
LCGNet: Local Sequential Feature Coupling Global Representation Learning for Functional Connectivity Network Analysis with fMRI.
Pub Date : 2024-07-01 DOI: 10.1109/TMI.2024.3421360
Jie Zhou, Biao Jie, Zhengdong Wang, Zhixiang Zhang, Tongchun Du, Weixin Bian, Yang Yang, Jun Jia

Analysis of functional connectivity networks (FCNs) derived from resting-state functional magnetic resonance imaging (rs-fMRI) has greatly advanced our understanding of brain diseases, including Alzheimer's disease (AD) and attention deficit hyperactivity disorder (ADHD). Advanced machine learning techniques, such as convolutional neural networks (CNNs), have been used to learn high-level feature representations of FCNs for automated brain disease classification. Even though the convolution operations in CNNs are good at extracting local properties of FCNs, they generally cannot capture global temporal representations of FCNs well. Recently, the transformer technique has demonstrated remarkable performance in various tasks, owing to its effective self-attention mechanism for capturing global temporal feature representations. However, it cannot effectively model the local network characteristics of FCNs. To this end, in this paper we propose a novel network structure for Local sequential feature Coupling Global representation learning (LCGNet), which takes advantage of both convolutional operations and self-attention mechanisms for enhanced FCN representation learning. Specifically, we first build a dynamic FCN for each subject using an overlapped sliding-window approach. We then construct three sequential components (i.e., an edge-to-vertex layer, a vertex-to-network layer, and a network-to-temporality layer) with a dual-backbone branch of CNN and transformer to extract and couple local-to-global topological information of brain networks. Experimental results on two real rs-fMRI datasets (i.e., ADNI and ADHD-200) show the superiority of our LCGNet.
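For reference, here is a minimal sketch of the overlapped sliding-window construction of a dynamic FCN; the window length and stride are illustrative choices, not the paper's settings.

import numpy as np

def dynamic_fcn(timeseries, win_len=30, stride=10):
    # timeseries: (T, R) array of T time points over R regions of interest.
    # Returns a (num_windows, R, R) stack of windowed correlation matrices.
    T, R = timeseries.shape
    fcns = []
    for start in range(0, T - win_len + 1, stride):
        window = timeseries[start:start + win_len]      # (win_len, R)
        fcns.append(np.corrcoef(window.T))              # (R, R)
    return np.stack(fcns)

# Example: 150 time points, 90 ROIs -> 13 overlapped windowed FCNs
ts = np.random.randn(150, 90)
print(dynamic_fcn(ts).shape)   # (13, 90, 90)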

Citations: 0
Towards human-scale magnetic particle imaging: development of the first system with superconductor-based selection coils.
Pub Date : 2024-06-26 DOI: 10.1109/TMI.2024.3419427
Tuan-Anh Le, Minh Phu Bui, Yaser Hadadian, Khaled Mohamed Gadelmowla, Seungjun Oh, Chaemin Im, Seungyong Hahn, Jungwon Yoon

Magnetic Particle Imaging (MPI) is an emerging tomographic modality that allows precise three-dimensional (3D) mapping of magnetic nanoparticle (MNP) concentration and distribution. Although significant progress has been made towards improving MPI since its introduction, scaling it up for human applications has proven challenging. High-quality images have been obtained in animal-scale MPI scanners with gradients up to 7 T/m/μ0; however, for MPI systems with bore diameters around 200 mm, the gradients generated by electromagnets drop significantly, to below 0.5 T/m/μ0. Given the current technological limitations in image reconstruction and the properties of available MNPs, these low gradients inherently limit the MPI resolution achievable for higher-precision medical imaging. Utilizing superconductors stands out as a promising approach for developing a human-scale MPI system. In this study, we introduce, for the first time, a human-scale amplitude-modulated (AM) MPI system with superconductor-based selection coils. The system achieves an unprecedented magnetic field gradient of up to 2.5 T/m/μ0 within a 200 mm bore diameter, enabling a large field of view of 100 × 130 × 98 mm³ at 2.5 T/m/μ0 for 3D imaging. While the obtained spatial resolution is on the order of previous animal-scale AM MPI systems, incorporating superconductors to achieve such high gradients in a 200 mm bore diameter marks a major step toward clinical MPI.

Citations: 0
HiCervix: An Extensive Hierarchical Dataset and Benchmark for Cervical Cytology Classification.
Pub Date : 2024-06-26 DOI: 10.1109/TMI.2024.3419697
De Cai, Jie Chen, Junhan Zhao, Yuan Xue, Sen Yang, Wei Yuan, Min Feng, Haiyan Weng, Shuguang Liu, Yulong Peng, Junyou Zhu, Kanran Wang, Christopher Jackson, Hongping Tang, Junzhou Huang, Xiyue Wang

Cervical cytology is a critical screening strategy for early detection of pre-cancerous and cancerous cervical lesions. The challenge lies in accurately classifying the various cervical cytology cell types. Existing automated cervical cytology methods are primarily trained on databases covering a narrow range of coarse-grained cell types, which fail to provide a comprehensive and detailed performance analysis that accurately represents real-world cytopathology conditions. To overcome these limitations, we introduce HiCervix, the most extensive multi-center cervical cytology dataset currently available to the public. HiCervix includes 40,229 cervical cells from 4,496 whole slide images, categorized into 29 annotated classes. These classes are organized within a three-level hierarchical tree to capture fine-grained subtype information. To exploit the semantic correlation inherent in this hierarchical tree, we propose HierSwin, a hierarchical vision-transformer-based classification network. HierSwin serves as a benchmark for detailed feature learning in both coarse-level and fine-level cervical cancer classification tasks. In our comprehensive experiments, HierSwin demonstrated remarkable performance, achieving 92.08% accuracy for coarse-level classification and 82.93% accuracy averaged across all three levels. When compared to board-certified cytopathologists, HierSwin achieved higher classification performance (0.8293 versus 0.7359 average accuracy), highlighting its potential for clinical applications. The newly released HiCervix dataset, along with our benchmark HierSwin method, is poised to make a substantial impact on the advancement of deep learning algorithms for rapid cervical cancer screening and to greatly improve cancer prevention and patient outcomes in real-world clinical settings.
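To illustrate how a three-level label tree can be exploited, here is a hedged sketch of per-level classification heads on a shared embedding trained with a summed cross-entropy; HierSwin's actual Swin-style architecture is more involved, and all names and dimensions below are assumptions.

import torch.nn as nn
import torch.nn.functional as F

class HierarchicalHead(nn.Module):
    # One linear classifier per level of the label tree, all sharing
    # the same backbone embedding.
    def __init__(self, dim, n_coarse, n_mid, n_fine):
        super().__init__()
        self.heads = nn.ModuleList(
            [nn.Linear(dim, n) for n in (n_coarse, n_mid, n_fine)])

    def forward(self, feat):           # feat: (B, dim) backbone features
        return [h(feat) for h in self.heads]

def hierarchical_loss(logits, labels, weights=(1.0, 1.0, 1.0)):
    # labels: one (B,) tensor of class ids per level of the tree.
    return sum(w * F.cross_entropy(lg, y)
               for w, lg, y in zip(weights, logits, labels))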

Citations: 0
PtbNet: Based on Local Few-Shot Classes and Small Objects to accurately detect PTB.
Pub Date : 2024-06-26 DOI: 10.1109/TMI.2024.3419134
Wenhui Yang, Shuo Gao, Hao Zhang, Hong Yu, Menglei Xu, Puimun Chong, Weijie Zhang, Hong Wang, Wenjuan Zhang, Airong Qian

Pulmonary Tuberculosis (PTB) is one of the world's most infectious illnesses, and early detection is critical for its prevention. Digital Radiography (DR) has been the most common and effective technique for examining PTB. However, due to the variety and weak specificity of phenotypes on DR chest X-rays (DCRs), it is difficult for radiologists to make reliable diagnoses. Although artificial intelligence technology has made considerable gains in assisting the diagnosis of PTB, it still lacks methods for identifying PTB lesions characterized by few-shot classes and small objects. To solve these problems, geometric data augmentation was used to increase the number of DCRs, and a diffusion probability model was implemented for the six few-shot classes. Importantly, we propose a new multi-lesion detector, PtbNet, built on RetinaNet to detect the small objects of PTB lesions. The results show that, with the two data augmentations, the number of DCRs increased from 570 to 2,859. In pre-evaluation experiments against the RetinaNet baseline, AP improved by 9.9 for the six few-shot classes. Our extensive empirical evaluation shows that PtbNet achieved an AP of 28.2, outperforming nine other state-of-the-art methods. In the ablation study, combining BiFPN+ and PSPD-Conv increased AP by 2.1, AP for small objects by 5.0, and AP for medium and large objects by an average of 9.8. In summary, PtbNet not only improves the detection of small-object lesions but also enhances the ability to detect different types of PTB uniformly, which helps physicians diagnose PTB lesions accurately. The code is available at https://github.com/Wenhui-person/PtbNet/tree/master.
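As an illustration of the geometric augmentation step, a minimal torchvision sketch follows; the specific transforms and magnitudes are our assumptions, not the paper's recipe.

import torchvision.transforms as T

# Geometric-only transforms vary pose while leaving radiographic
# intensities untouched, which suits grayscale chest DR images.
geo_aug = T.Compose([
    T.RandomHorizontalFlip(p=0.5),
    T.RandomRotation(degrees=10),
    T.RandomAffine(degrees=0, translate=(0.05, 0.05), scale=(0.9, 1.1)),
])

# Each pass of a PIL image or image tensor through geo_aug yields a new
# augmented copy, so a dataset can be expanded several-fold.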

Citations: 0
Accurate Airway Tree Segmentation in CT Scans via Anatomy-aware Multi-class Segmentation and Topology-guided Iterative Learning.
Pub Date : 2024-06-26 DOI: 10.1109/TMI.2024.3419707
Puyang Wang, Dazhou Guo, Dandan Zheng, Minghui Zhang, Haogang Yu, Xin Sun, Jia Ge, Yun Gu, Le Lu, Xianghua Ye, Dakai Jin

Intrathoracic airway segmentation in computed tomography is a prerequisite for analyses of various respiratory diseases such as chronic obstructive pulmonary disease, asthma, and lung cancer. Due to the low imaging contrast and the noise at peripheral branches, as well as the topological complexity and intra-class imbalance of the airway tree, it remains challenging for deep learning-based methods to segment the complete airway tree (i.e., to extract the deeper branches). Unlike other organs with simpler shapes or topology, the airway's complex tree structure imposes a heavy burden on generating "ground truth" labels (up to 7 hours of manual or 3 hours of semi-automatic annotation per case). Most existing airway datasets are incompletely labeled/annotated, thus limiting the completeness of computer-segmented airways. In this paper, we propose a new anatomy-aware multi-class airway segmentation method enhanced by topology-guided iterative self-learning. Based on the natural airway anatomy, we formulate a simple yet highly effective anatomy-aware multi-class segmentation task to intuitively handle the severe intra-class imbalance of the airway. To solve the incomplete-labeling issue, we propose a tailored iterative self-learning scheme to segment toward the complete airway tree. To generate pseudo-labels with higher sensitivity (while retaining similar specificity), we introduce a novel breakage attention map and design a topology-guided pseudo-label refinement method that iteratively connects the broken branches commonly present in initial pseudo-labels. Extensive experiments have been conducted on four datasets, including two public challenges. The proposed method achieves the top performance in both the EXACT'09 challenge (average score) and the ATM'22 challenge (weighted average score). On a public BAS dataset and a private lung cancer dataset, our method significantly improves on previous leading approaches, extracting at least 6.1% (absolute) more detected tree length and 5.2% more tree branches while maintaining comparable precision.
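The paper's breakage-attention refinement is specific to its pipeline, but as a hedged illustration of the kind of topology-guided cleanup applied to pseudo-labels, the sketch below keeps only the largest 26-connected component of a binary airway volume, dropping isolated false-positive islands before the labels are reused for retraining.

import numpy as np
from scipy import ndimage

def largest_component(mask):
    # mask: 3-D boolean airway prediction. Keeping the largest
    # 26-connected component is one crude topology check before
    # pseudo-labels are folded back into training.
    labeled, n = ndimage.label(mask, structure=np.ones((3, 3, 3)))
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labeled, index=range(1, n + 1))
    return labeled == (int(np.argmax(sizes)) + 1)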

Citations: 0
Temporal Dynamic Synchronous Functional Brain Network for Schizophrenia Classification and Lateralization Analysis.
Pub Date : 2024-06-25 DOI: 10.1109/TMI.2024.3419041
Cheng Zhu, Ying Tan, Shuqi Yang, Jiaqing Miao, Jiayi Zhu, Huan Huang, Dezhong Yao, Cheng Luo

Available evidence suggests that dynamic functional connectivity can capture time-varying abnormalities in brain activity in resting-state cerebral functional magnetic resonance imaging (rs-fMRI) data and has a natural advantage in uncovering mechanisms of abnormal brain activity in schizophrenia (SZ) patients. Hence, an advanced dynamic brain network analysis model called the temporal brain category graph convolutional network (Temporal-BCGCN) was employed. First, a unique dynamic brain network analysis module, DSF-BrainNet, was designed to construct dynamic synchronization features. Subsequently, a revolutionary graph convolution method, TemporalConv, was proposed based on the synchronous temporal properties of the features. Finally, we propose CategoryPool, the first deep-learning-based modular test tool for abnormal hemispheric lateralization from rs-fMRI data. This study was validated on the COBRE and UCLA datasets and achieved 83.62% and 89.71% average accuracies, respectively, outperforming the baseline model and other state-of-the-art methods. The ablation results also demonstrate the advantages of TemporalConv over the traditional edge-feature graph convolution approach and the improvement of CategoryPool over the classical graph pooling approach. Interestingly, this study shows that the lower-order perceptual system and higher-order network regions in the left hemisphere are more severely dysfunctional in SZ than those in the right hemisphere, reaffirming the importance of the left medial superior frontal gyrus in SZ. Our code is available at: https://github.com/swfen/Temporal-BCGCN.
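As a rough illustration of coupling graph convolution with temporal convolution over a sequence of windowed FCNs, here is a generic stand-in sketch; the real TemporalConv and CategoryPool designs differ, and all dimensions and layer choices below are assumptions.

import torch
import torch.nn as nn

class SimpleTemporalGCN(nn.Module):
    # Graph convolution applied to each windowed FCN, followed by a
    # 1-D convolution across windows to mix temporal context.
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gc = nn.Linear(in_dim, hid_dim)       # shared graph weights
        self.tc = nn.Conv1d(hid_dim, hid_dim, kernel_size=3, padding=1)

    def forward(self, adj, x):
        # adj: (B, W, N, N) window-wise connectivity; x: (B, W, N, F)
        h = torch.relu(self.gc(adj @ x))           # graph message passing
        h = h.mean(dim=2)                          # pool nodes: (B, W, H)
        h = self.tc(h.transpose(1, 2))             # convolve over windows
        return h.transpose(1, 2)                   # (B, W, H)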

Citations: 0