
Latest Articles from IEEE Transactions on Medical Imaging

Continuous 3D Myocardial Motion Tracking via Echocardiography
IF 10.6 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date: 2024-06-27 DOI: 10.1109/tmi.2024.3419780
Chengkang Shen, Hao Zhu, You Zhou, Yu Liu, Si Yi, Lili Dong, Weipeng Zhao, David J. Brady, Xun Cao, Zhan Ma, Yi Lin
{"title":"Continuous 3D Myocardial Motion Tracking via Echocardiography","authors":"Chengkang Shen, Hao Zhu, You Zhou, Yu Liu, Si Yi, Lili Dong, Weipeng Zhao, David J. Brady, Xun Cao, Zhan Ma, Yi Lin","doi":"10.1109/tmi.2024.3419780","DOIUrl":"https://doi.org/10.1109/tmi.2024.3419780","url":null,"abstract":"","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":null,"pages":null},"PeriodicalIF":10.6,"publicationDate":"2024-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141462351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
IMJENSE: Scan-specific Implicit Representation for Joint Coil Sensitivity and Image Estimation in Parallel MRI
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-11-21 DOI: 10.48550/arXiv.2311.12892
Rui-jun Feng, Qing Wu, Jie Feng, Huajun She, Chunlei Liu, Yuyao Zhang, Hongjiang Wei
Parallel imaging is a commonly used technique to accelerate magnetic resonance imaging (MRI) data acquisition. Mathematically, parallel MRI reconstruction can be formulated as an inverse problem relating the sparsely sampled k-space measurements to the desired MRI image. Despite the success of many existing reconstruction algorithms, it remains a challenge to reliably reconstruct a high-quality image from highly reduced k-space measurements. Recently, implicit neural representation has emerged as a powerful paradigm to exploit the internal information and the physics of partially acquired data to generate the desired object. In this study, we introduced IMJENSE, a scan-specific implicit neural representation-based method for improving parallel MRI reconstruction. Specifically, the underlying MRI image and coil sensitivities were modeled as continuous functions of spatial coordinates, parameterized by neural networks and polynomials, respectively. The weights in the networks and coefficients in the polynomials were simultaneously learned directly from sparsely acquired k-space measurements, without fully sampled ground truth data for training. Benefiting from the powerful continuous representation and joint estimation of the MRI image and coil sensitivities, IMJENSE outperforms conventional image or k-space domain reconstruction algorithms. With extremely limited calibration data, IMJENSE is more stable than supervised calibrationless and calibration-based deep-learning methods. Results show that IMJENSE robustly reconstructs the images acquired at 5× and 6× accelerations with only 4 or 8 calibration lines in 2D Cartesian acquisitions, corresponding to 22.0% and 19.5% undersampling rates. The high-quality results and scanning specificity make the proposed method hold the potential for further accelerating the data acquisition of parallel MRI.
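As a rough illustration of the idea in the abstract, the sketch below fits an implicit image (a coordinate MLP) and polynomial coil sensitivities directly to undersampled k-space. It is not the authors' implementation; the network size, the second-order polynomial basis, and all tensor shapes are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ImplicitImage(nn.Module):
    """Coordinate MLP mapping (x, y) in [-1, 1]^2 to a complex image value."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),  # real and imaginary parts
        )

    def forward(self, coords):  # coords: (N, 2)
        out = self.net(coords)
        return torch.complex(out[..., 0], out[..., 1])

def polynomial_sensitivities(coords, coeffs):
    """Coil maps as second-order polynomials of (x, y); coeffs: (n_coils, 6), complex."""
    x, y = coords[:, 0], coords[:, 1]
    basis = torch.stack([torch.ones_like(x), x, y, x * y, x ** 2, y ** 2], dim=-1)
    return basis.to(coeffs.dtype) @ coeffs.T  # (N, n_coils)

def data_consistency_loss(image_fn, coeffs, coords, mask, kspace, shape):
    """Penalize mismatch with the sparsely acquired k-space samples only."""
    img = image_fn(coords).reshape(shape)                        # (H, W) complex image
    sens = polynomial_sensitivities(coords, coeffs).reshape(*shape, -1)
    coil_imgs = img.unsqueeze(-1) * sens                         # (H, W, n_coils)
    pred_k = torch.fft.fft2(coil_imgs, dim=(0, 1))
    return ((pred_k - kspace).abs() ** 2 * mask.unsqueeze(-1)).mean()
```

Both the MLP weights and the polynomial coefficients would be optimized jointly against this loss, which reflects the scan-specific aspect of the method: nothing is trained on external fully sampled data.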
Citations: 0
A Learnable Counter-condition Analysis Framework for Functional Connectivity-based Neurological Disorder Diagnosis
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-10-06 DOI: 10.48550/arXiv.2310.03964
Eunsong Kang, Da-Woon Heo, Jiwon Lee, Heung-Il Suk
To understand the biological characteristics of neurological disorders with functional connectivity (FC), recent studies have widely utilized deep learning-based models to identify the disease and conducted post-hoc analyses via explainable models to discover disease-related biomarkers. Most existing frameworks consist of three stages, namely, feature selection, feature extraction for classification, and analysis, where each stage is implemented separately. However, if the results at each stage lack reliability, it can cause misdiagnosis and incorrect analysis in subsequent stages. In this study, we propose a novel unified framework that systematically integrates diagnoses (i.e., feature selection and feature extraction) and explanations. Notably, we devised an adaptive attention network as a feature selection approach to identify individual-specific disease-related connections. We also propose a functional network relational encoder that summarizes the global topological properties of FC by learning the inter-network relations without pre-defined edges between functional networks. Last but not least, our framework provides a novel explanatory power for neuroscientific interpretation, also termed counter-condition analysis. We simulated the FC that reverses the diagnostic information (i.e., counter-condition FC): converting a normal brain to an abnormal one and vice versa. We validated the effectiveness of our framework by using two large resting-state functional magnetic resonance imaging (fMRI) datasets, Autism Brain Imaging Data Exchange (ABIDE) and REST-meta-MDD, and demonstrated that our framework outperforms other competing methods for disease identification. Furthermore, we analyzed the disease-related neurological patterns based on counter-condition analysis.
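The "adaptive attention network as a feature selection approach" can be pictured, in heavily simplified form, as a learned gate over individual functional connections. The sketch below is a generic attention-gated classifier, not the authors' architecture; the ROI count, bottleneck size, and sigmoid gating are illustrative assumptions.

```python
import torch
import torch.nn as nn

class AttentionGatedFC(nn.Module):
    """Gate each functional connection with a learned, subject-specific weight."""
    def __init__(self, n_rois=116, n_classes=2):
        super().__init__()
        n_feat = n_rois * n_rois
        self.attn = nn.Sequential(
            nn.Linear(n_feat, 64), nn.ReLU(),
            nn.Linear(64, n_feat), nn.Sigmoid(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(n_feat, 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, fc):                 # fc: (batch, n_rois, n_rois) connectivity matrices
        flat = fc.flatten(1)
        gate = self.attn(flat)             # per-connection attention weights in (0, 1)
        return self.classifier(flat * gate), gate

logits, gate = AttentionGatedFC()(torch.randn(4, 116, 116))
```

Inspecting the per-subject gate values is one simple way to read off which connections the classifier relied on, which is the role feature selection plays in the framework described above.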
Citations: 0
Masked conditional variational autoencoders for chromosome straightening
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-06-25 DOI: 10.48550/arXiv.2306.14129
Jingxiong Li, S. Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Y. Ye, P. V. Ooijen, Kang Li, Lin Yang
Karyotyping is of importance for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE). The processing method utilizes patch rearrangement to address the difficulty in erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE with eliminated redundancy. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses the performance of state-of-the-art methods in retaining banding patterns and structure details. Compared to using real-world bent chromosomes, the use of high-quality straightened chromosomes generated by our proposed method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
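The high-masking-ratio strategy that drives the MC-VAE's non-trivial reconstruction task can be sketched as simple patch dropout, as below. The 16-pixel patch size and 75% ratio are arbitrary assumptions; the conditional VAE itself and the curvature conditioning are not shown.

```python
import torch

def mask_patches(images, patch=16, mask_ratio=0.75):
    """Zero out a random subset of non-overlapping patches in each image.

    images: (B, C, H, W) with H and W divisible by `patch`.
    """
    b, _, h, w = images.shape
    gh, gw = h // patch, w // patch
    n_patches = gh * gw
    n_keep = int(n_patches * (1 - mask_ratio))
    out = images.clone()
    for i in range(b):
        dropped = torch.randperm(n_patches)[n_keep:]        # patches to erase for this image
        for idx in dropped.tolist():
            r, c = divmod(idx, gw)
            out[i, :, r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0.0
    return out

masked = mask_patches(torch.rand(2, 1, 128, 128))  # reconstruction target stays the full image
```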
Citations: 0
A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-05-23 DOI: 10.48550/arXiv.2305.14301
Fangda Li, Zhiqiang Hu, Wen Chen, A. Kak
Hematoxylin and Eosin (H&E) staining is a widely used sample preparation procedure for enhancing the saturation of tissue sections and the contrast between nuclei and cytoplasm in histology images for medical diagnostics. However, various factors, such as the differences in the reagents used, result in high variability in the colors of the stains actually recorded. This variability poses a challenge in achieving generalization for machine-learning based computer-aided diagnostic tools. To desensitize the learned models to stain variations, we propose the Generative Stain Augmentation Network (G-SAN) - a GAN-based framework that augments a collection of cell images with simulated yet realistic stain variations. At its core, G-SAN uses a novel and highly computationally efficient Laplacian Pyramid (LP) based generator architecture, that is capable of disentangling stain from cell morphology. Through the task of patch classification and nucleus segmentation, we show that using G-SAN-augmented training data provides on average 15.7% improvement in F1 score and 7.3% improvement in panoptic quality, respectively. Our code is available at https://github.com/lifangda01/GSAN-Demo.
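For readers unfamiliar with the representation, the sketch below builds and collapses a Laplacian pyramid, the multi-scale decomposition the G-SAN generator operates on according to the abstract. This is standard pyramid code (here with OpenCV), not the authors' network; the three-level depth is an arbitrary choice. A generator that edits mainly the coarse levels can alter stain appearance while leaving the fine, morphology-carrying detail layers largely untouched, which is one plausible reading of the disentanglement claim.

```python
import cv2
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Return band-pass residuals at each scale plus the final low-frequency image."""
    pyramid, cur = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyramid.append(cur - up)   # detail (band-pass) layer
        cur = down
    pyramid.append(cur)            # coarse residual
    return pyramid

def collapse_laplacian_pyramid(pyramid):
    cur = pyramid[-1]
    for lap in reversed(pyramid[:-1]):
        cur = cv2.pyrUp(cur, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return cur

img = np.random.rand(256, 256, 3).astype(np.float32)
rec = collapse_laplacian_pyramid(build_laplacian_pyramid(img))
assert np.allclose(rec, img, atol=1e-3)   # the decomposition is exactly invertible
```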
Citations: 1
Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-05-11 DOI: 10.48550/arXiv.2305.06739
Veronika Spieker, H. Eichhorn, K. Hammernik, D. Rueckert, C. Preibisch, D. Karampinos, J. Schnabel
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
Citations: 2
FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain Adaptation of Medical Image Segmentation
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-04-26 DOI: 10.48550/arXiv.2304.13672
Yan Wang, Jian Cheng, Yixin Chen, Shuai Shao, Lanyun Zhu, Zhenzhou Wu, T. Liu, Haogang Zhu
Medical image segmentation methods normally perform poorly when there is a domain shift between training and testing data. Unsupervised Domain Adaptation (UDA) addresses the domain shift problem by training the model using both labeled data from the source domain and unlabeled data from the target domain. Source-Free UDA (SFUDA) was recently proposed to perform UDA without access to the source data during adaptation, for example due to data privacy or data transmission issues; it typically adapts the pre-trained deep model in the testing stage. However, in real clinical scenarios of medical image segmentation, the trained model is normally frozen in the testing stage. In this paper, we propose Fourier Visual Prompting (FVP) for SFUDA of medical image segmentation. Inspired by prompt learning in natural language processing, FVP steers the frozen pre-trained model to perform well in the target domain by adding a visual prompt to the input target data. In FVP, the visual prompt is parameterized using only a small number of low-frequency learnable parameters in the input frequency space, and is learned by minimizing the segmentation loss between the predicted segmentation of the prompted target image and a reliable pseudo segmentation label of the target image under the frozen model. To our knowledge, FVP is the first work to apply visual prompts to SFUDA for medical image segmentation. The proposed FVP is validated using three public datasets, and experiments demonstrate that FVP yields better segmentation results compared with various existing methods.
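A minimal sketch of the Fourier visual prompt described above: a small grid of learnable coefficients is added to the low-frequency centre of each image's spectrum, and only these parameters are updated against a segmentation loss while the model stays frozen. The 16x16 prompt size, the pseudo-label source, and the cross-entropy loss are assumptions made for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

class FourierPrompt(torch.nn.Module):
    """Learnable low-frequency perturbation applied in the image's frequency space."""
    def __init__(self, prompt_size=16):
        super().__init__()
        self.real = torch.nn.Parameter(torch.zeros(prompt_size, prompt_size))
        self.imag = torch.nn.Parameter(torch.zeros(prompt_size, prompt_size))

    def forward(self, x):                                # x: (B, C, H, W), real-valued
        spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
        h, w = x.shape[-2:]
        p = self.real.shape[0]
        r0, c0 = h // 2 - p // 2, w // 2 - p // 2
        pad = (c0, w - c0 - p, r0, h - r0 - p)           # centre the prompt in the spectrum
        prompt = torch.complex(F.pad(self.real, pad), F.pad(self.imag, pad))
        spec = spec + prompt
        return torch.fft.ifft2(torch.fft.ifftshift(spec, dim=(-2, -1))).real

def adapt_step(frozen_model, prompt, images, pseudo_labels, optimizer):
    """One adaptation step: gradients flow only into the prompt parameters."""
    optimizer.zero_grad()
    logits = frozen_model(prompt(images))                # frozen_model weights stay fixed
    loss = F.cross_entropy(logits, pseudo_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```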
Citations: 1
Point-supervised Single-cell Segmentation via Collaborative Knowledge Sharing
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-04-20 DOI: 10.48550/arXiv.2304.10671
Ji Yu
Despite their superior performance, deep-learning methods often suffer from the disadvantage of needing large-scale, well-annotated training data. In response, recent literature has seen a proliferation of efforts aimed at reducing the annotation burden. This paper focuses on a weakly-supervised training setting for single-cell segmentation models, where the only available training label is the rough locations of individual cells. The specific problem is of practical interest due to the widely available nuclei counter-stain data in biomedical literature, from which the cell locations can be derived programmatically. Of more general interest is a proposed self-learning method called collaborative knowledge sharing, which is related to but distinct from the more well-known consistency learning methods. This strategy achieves self-learning by sharing knowledge between a principal model and a very lightweight collaborator model. Importantly, the two models are entirely different in their architectures, capacities, and model outputs: in our case, the principal model approaches the segmentation problem from an object-detection perspective, whereas the collaborator model approaches it from a semantic segmentation perspective. We assessed the effectiveness of this strategy by conducting experiments on LIVECell, a large single-cell segmentation dataset of bright-field images, and on the A431 dataset, a fluorescence image dataset in which the location labels are generated automatically from nuclei counter-stain data. Implementation code is available at https://github.com/jiyuuchc/lacss.
Citations: 0
Ideal Observer Computation by Use of Markov-Chain Monte Carlo with Generative Adversarial Networks
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-04-02 DOI: 10.48550/arXiv.2304.00433
Weimin Zhou, Umberto Villa, M. Anastasio
Medical imaging systems are often evaluated and optimized via objective, or task-specific, measures of image quality (IQ) that quantify the performance of an observer on a specific clinically-relevant task. The performance of the Bayesian Ideal Observer (IO) sets an upper limit among all observers, numerical or human, and has been advocated for use as a figure-of-merit (FOM) for evaluating and optimizing medical imaging systems. However, the IO test statistic corresponds to the likelihood ratio that is intractable to compute in the majority of cases. A sampling-based method that employs Markov-Chain Monte Carlo (MCMC) techniques was previously proposed to estimate the IO performance. However, current applications of MCMC methods for IO approximation have been limited to a small number of situations where the considered distribution of to-be-imaged objects can be described by a relatively simple stochastic object model (SOM). As such, there remains an important need to extend the domain of applicability of MCMC methods to address a large variety of scenarios where IO-based assessments are needed but the associated SOMs have not been available. In this study, a novel MCMC method that employs a generative adversarial network (GAN)-based SOM, referred to as MCMC-GAN, is described and evaluated. The MCMC-GAN method was quantitatively validated by use of test-cases for which reference solutions were available. The results demonstrate that the MCMC-GAN method can extend the domain of applicability of MCMC methods for conducting IO analyses of medical imaging systems.
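The core computational ingredient, running MCMC in the latent space of a GAN-based stochastic object model, can be sketched as a random-walk Metropolis-Hastings chain, as below. The standard-normal latent prior, the Gaussian noise model, the step size, and the generator/forward-operator interfaces are placeholder assumptions; estimating the actual IO test statistic additionally requires sampling under both hypotheses, which is omitted here.

```python
import torch

def log_posterior(z, generator, measurement, forward_op, noise_std):
    """log p(z) + log p(g | H f(z)) under a standard-normal prior and Gaussian noise."""
    obj = generator(z)
    residual = measurement - forward_op(obj)
    return -0.5 * (z ** 2).sum() - 0.5 * (residual ** 2).sum() / noise_std ** 2

@torch.no_grad()
def metropolis_hastings(generator, measurement, forward_op, noise_std=0.05,
                        n_steps=1000, step=0.05, latent_dim=128):
    z = torch.randn(latent_dim)
    logp = log_posterior(z, generator, measurement, forward_op, noise_std)
    samples = []
    for _ in range(n_steps):
        z_new = z + step * torch.randn(latent_dim)       # symmetric random-walk proposal
        logp_new = log_posterior(z_new, generator, measurement, forward_op, noise_std)
        if torch.rand(()) < torch.exp(logp_new - logp):  # Metropolis acceptance test
            z, logp = z_new, logp_new
        samples.append(z.clone())
    return torch.stack(samples)
```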
Citations: 0
Aligning Multi-Sequence CMR Towards Fully Automated Myocardial Pathology Segmentation
IF 10.6 Tier 1 (Medicine) Q1 Health Professions Pub Date: 2023-02-07 DOI: 10.48550/arXiv.2302.03537
Wangbin Ding, Lei Li, Junyi Qiu, Sihan Wang, Liqin Huang, Yinyin Chen, Shan Yang, X. Zhuang
Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS, but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to the respiratory motions in clinical practices, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, considering the spatial relationship between myocardial pathologies and myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework could achieve promising performance for fully automatic MyoPS.
Citations: 0