
Latest publications in IEEE Transactions on Medical Imaging

Masked conditional variational autoencoders for chromosome straightening
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-06-25 | DOI: 10.48550/arXiv.2306.14129
Jingxiong Li, S. Zheng, Zhongyi Shui, Shichuan Zhang, Linyi Yang, Yuxuan Sun, Yunlong Zhang, Honglin Li, Y. Ye, P. V. Ooijen, Kang Li, Lin Yang
Karyotyping is of importance for detecting chromosomal aberrations in human disease. However, chromosomes easily appear curved in microscopic images, which prevents cytogeneticists from analyzing chromosome types. To address this issue, we propose a framework for chromosome straightening, which comprises a preliminary processing algorithm and a generative model called masked conditional variational autoencoders (MC-VAE). The processing method utilizes patch rearrangement to address the difficulty in erasing low degrees of curvature, providing reasonable preliminary results for the MC-VAE. The MC-VAE further straightens the results by leveraging chromosome patches conditioned on their curvatures to learn the mapping between banding patterns and conditions. During model training, we apply a masking strategy with a high masking ratio to train the MC-VAE with eliminated redundancy. This yields a non-trivial reconstruction task, allowing the model to effectively preserve chromosome banding patterns and structure details in the reconstructed results. Extensive experiments on three public datasets with two stain styles show that our framework surpasses the performance of state-of-the-art methods in retaining banding patterns and structure details. Compared to using real-world bent chromosomes, the use of high-quality straightened chromosomes generated by our proposed method can improve the performance of various deep learning models for chromosome classification by a large margin. Such a straightening approach has the potential to be combined with other karyotyping systems to assist cytogeneticists in chromosome analysis.
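The high-masking-ratio training strategy described in this abstract can be sketched in a minimal, hypothetical form: split each image into non-overlapping patches and hide most of them before reconstruction. The patch size, masking ratio, and zero fill value below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def mask_patches(image, patch=4, ratio=0.75, rng=None):
    """Split a square image into non-overlapping patches and zero out a high
    fraction of them; return the masked image and the boolean patch mask
    (True = hidden). Illustrative sketch of a high-masking-ratio strategy."""
    rng = np.random.default_rng(rng)
    h, w = image.shape
    gh, gw = h // patch, w // patch
    n = gh * gw
    keep = max(1, int(round(n * (1 - ratio))))   # number of visible patches
    visible = rng.permutation(n)[:keep]
    mask = np.ones(n, dtype=bool)
    mask[visible] = False
    out = image.copy()
    for k in np.flatnonzero(mask):               # zero every hidden patch
        r, c = divmod(k, gw)
        out[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch] = 0
    return out, mask.reshape(gh, gw)

img = np.arange(64, dtype=float).reshape(8, 8)
masked, mask = mask_patches(img, patch=4, ratio=0.75, rng=0)
```

A reconstruction network would then be trained to recover the original image from `masked`, making the task non-trivial because most of the banding pattern must be inferred rather than copied.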
Citations: 0
A Laplacian Pyramid Based Generative H&E Stain Augmentation Network
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-05-23 | DOI: 10.48550/arXiv.2305.14301
Fangda Li, Zhiqiang Hu, Wen Chen, A. Kak
Hematoxylin and Eosin (H&E) staining is a widely used sample preparation procedure for enhancing the saturation of tissue sections and the contrast between nuclei and cytoplasm in histology images for medical diagnostics. However, various factors, such as the differences in the reagents used, result in high variability in the colors of the stains actually recorded. This variability poses a challenge in achieving generalization for machine-learning based computer-aided diagnostic tools. To desensitize the learned models to stain variations, we propose the Generative Stain Augmentation Network (G-SAN) - a GAN-based framework that augments a collection of cell images with simulated yet realistic stain variations. At its core, G-SAN uses a novel and highly computationally efficient Laplacian Pyramid (LP) based generator architecture, that is capable of disentangling stain from cell morphology. Through the task of patch classification and nucleus segmentation, we show that using G-SAN-augmented training data provides on average 15.7% improvement in F1 score and 7.3% improvement in panoptic quality, respectively. Our code is available at https://github.com/lifangda01/GSAN-Demo.
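The Laplacian pyramid decomposition underlying the G-SAN generator can be illustrated with a minimal sketch: each level stores the detail lost by downsampling, and adding the bands back up recovers the image exactly. The 2x2 mean pooling and nearest-neighbour upsampling here are simplifying assumptions, not G-SAN's actual operators.

```python
import numpy as np

def build_laplacian_pyramid(img, levels=3):
    """Decompose an image into a Laplacian pyramid: each entry is the detail
    band lost by one downsampling step; the last entry is the low-pass
    residual. A sketch of the decomposition, not G-SAN's generator."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        h, w = cur.shape
        low = cur.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))  # downsample
        up = low.repeat(2, axis=0).repeat(2, axis=1)               # upsample
        pyr.append(cur - up)                                       # detail band
        cur = low
    pyr.append(cur)                                                # residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid by upsampling and adding back each detail band."""
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = cur.repeat(2, axis=0).repeat(2, axis=1) + band
    return cur
```

The appeal of this structure for stain manipulation is that color lives mostly in the coarse residual while morphology lives in the fine detail bands, so the two can be edited somewhat independently.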
Citations: 1
Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-05-11 | DOI: 10.48550/arXiv.2305.06739
Veronika Spieker, H. Eichhorn, K. Hammernik, D. Rueckert, C. Preibisch, D. Karampinos, J. Schnabel
Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.
Citations: 2
FVP: Fourier Visual Prompting for Source-Free Unsupervised Domain Adaptation of Medical Image Segmentation
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-04-26 | DOI: 10.48550/arXiv.2304.13672
Yan Wang, Jian Cheng, Yixin Chen, Shuai Shao, Lanyun Zhu, Zhenzhou Wu, T. Liu, Haogang Zhu
Medical image segmentation methods normally perform poorly when there is a domain shift between training and testing data. Unsupervised Domain Adaptation (UDA) addresses the domain shift problem by training the model using both labeled data from the source domain and unlabeled data from the target domain. Source-Free UDA (SFUDA) was recently proposed for UDA without requiring the source data during the adaptation, due to data privacy or data transmission issues, which normally adapts the pre-trained deep model in the testing stage. However, in real clinical scenarios of medical image segmentation, the trained model is normally frozen in the testing stage. In this paper, we propose Fourier Visual Prompting (FVP) for SFUDA of medical image segmentation. Inspired by prompting learning in natural language processing, FVP steers the frozen pre-trained model to perform well in the target domain by adding a visual prompt to the input target data. In FVP, the visual prompt is parameterized using only a small amount of low-frequency learnable parameters in the input frequency space, and is learned by minimizing the segmentation loss between the predicted segmentation of the prompted target image and reliable pseudo segmentation label of the target image under the frozen model. To our knowledge, FVP is the first work to apply visual prompts to SFUDA for medical image segmentation. The proposed FVP is validated using three public datasets, and experiments demonstrate that FVP yields better segmentation results, compared with various existing methods.
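The core idea of parameterizing a prompt by a small number of low-frequency coefficients can be sketched as follows; the circular cutoff radius and the additive combination in frequency space are illustrative assumptions about FVP, not its exact formulation.

```python
import numpy as np

def apply_fourier_prompt(image, prompt, radius=4):
    """Add a learnable prompt to only the low-frequency coefficients of an
    image's centered 2-D FFT and return the prompted image plus the number
    of prompted bins. `prompt` holds one complex value per retained bin.
    A hypothetical sketch of a low-frequency visual prompt, not FVP itself."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2  # low-freq mask
    f[low] += prompt                       # perturb low frequencies only
    out = np.fft.ifft2(np.fft.ifftshift(f)).real
    return out, int(low.sum())
```

Because only the masked bins carry parameters, the prompt stays tiny relative to the image, which is what makes it cheap to learn while the segmentation model itself remains frozen.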
Citations: 1
Point-supervised Single-cell Segmentation via Collaborative Knowledge Sharing
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-04-20 | DOI: 10.48550/arXiv.2304.10671
Ji Yu
Despite their superior performance, deep-learning methods often suffer from the disadvantage of needing large-scale well-annotated training data. In response, recent literature has seen a proliferation of efforts aimed at reducing the annotation burden. This paper focuses on a weakly-supervised training setting for single-cell segmentation models, where the only available training label is the rough locations of individual cells. The specific problem is of practical interest due to the widely available nuclei counter-stain data in biomedical literature, from which the cell locations can be derived programmatically. Of more general interest is a proposed self-learning method called collaborative knowledge sharing, which is related to but distinct from the more well-known consistency learning methods. This strategy achieves self-learning by sharing knowledge between a principal model and a very light-weight collaborator model. Importantly, the two models are entirely different in their architectures, capacities, and model outputs: in our case, the principal model approaches the segmentation problem from an object-detection perspective, whereas the collaborator model approaches it from a semantic segmentation perspective. We assessed the effectiveness of this strategy by conducting experiments on LIVECell, a large single-cell segmentation dataset of bright-field images, and on the A431 dataset, a fluorescence image dataset in which the location labels are generated automatically from nuclei counter-stain data. Implementing code is available at https://github.com/jiyuuchc/lacss.
Citations: 0
Ideal Observer Computation by Use of Markov-Chain Monte Carlo with Generative Adversarial Networks
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-04-02 | DOI: 10.48550/arXiv.2304.00433
Weimin Zhou, Umberto Villa, M. Anastasio
Medical imaging systems are often evaluated and optimized via objective, or task-specific, measures of image quality (IQ) that quantify the performance of an observer on a specific clinically-relevant task. The performance of the Bayesian Ideal Observer (IO) sets an upper limit among all observers, numerical or human, and has been advocated for use as a figure-of-merit (FOM) for evaluating and optimizing medical imaging systems. However, the IO test statistic corresponds to the likelihood ratio that is intractable to compute in the majority of cases. A sampling-based method that employs Markov-Chain Monte Carlo (MCMC) techniques was previously proposed to estimate the IO performance. However, current applications of MCMC methods for IO approximation have been limited to a small number of situations where the considered distribution of to-be-imaged objects can be described by a relatively simple stochastic object model (SOM). As such, there remains an important need to extend the domain of applicability of MCMC methods to address a large variety of scenarios where IO-based assessments are needed but the associated SOMs have not been available. In this study, a novel MCMC method that employs a generative adversarial network (GAN)-based SOM, referred to as MCMC-GAN, is described and evaluated. The MCMC-GAN method was quantitatively validated by use of test-cases for which reference solutions were available. The results demonstrate that the MCMC-GAN method can extend the domain of applicability of MCMC methods for conducting IO analyses of medical imaging systems.
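The sampling step can be illustrated with a generic random-walk Metropolis-Hastings chain over a latent vector, standing in for MCMC-GAN's latent-space sampling; the Gaussian proposal and the toy log-posterior below are assumptions for illustration, not the paper's actual likelihood model.

```python
import numpy as np

def mcmc_latent(log_post, z0, steps=1000, step_size=0.3, rng=None):
    """Random-walk Metropolis-Hastings over a latent space: proposals perturb
    the latent vector z, and acceptance uses the (unnormalized) log-posterior
    of the generated object given the measurement. A generic MH sketch."""
    rng = np.random.default_rng(rng)
    z = np.asarray(z0, dtype=float)
    lp = log_post(z)
    chain = []
    for _ in range(steps):
        cand = z + step_size * rng.standard_normal(z.shape)  # propose
        lp_cand = log_post(cand)
        if np.log(rng.uniform()) < lp_cand - lp:             # accept/reject
            z, lp = cand, lp_cand
        chain.append(z.copy())
    return np.array(chain)

# toy target: a standard-normal latent posterior (stand-in for the GAN SOM)
chain = mcmc_latent(lambda z: -0.5 * np.sum(z ** 2), np.zeros(2),
                    steps=2000, rng=0)
```

In the paper's setting, `log_post` would combine the imaging-measurement likelihood with the latent prior of the GAN-based stochastic object model, which is what lets the chain explore realistic objects without an explicit SOM.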
Citations: 0
Aligning Multi-Sequence CMR Towards Fully Automated Myocardial Pathology Segmentation
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-02-07 | DOI: 10.48550/arXiv.2302.03537
Wangbin Ding, Lei Li, Junyi Qiu, Sihan Wang, Liqin Huang, Yinyin Chen, Shan Yang, X. Zhuang
Myocardial pathology segmentation (MyoPS) is critical for the risk stratification and treatment planning of myocardial infarction (MI). Multi-sequence cardiac magnetic resonance (MS-CMR) images can provide valuable information. For instance, balanced steady-state free precession cine sequences present clear anatomical boundaries, while late gadolinium enhancement and T2-weighted CMR sequences visualize myocardial scar and edema of MI, respectively. Existing methods usually fuse anatomical and pathological information from different CMR sequences for MyoPS, but assume that these images have been spatially aligned. However, MS-CMR images are usually unaligned due to the respiratory motions in clinical practices, which poses additional challenges for MyoPS. This work presents an automatic MyoPS framework for unaligned MS-CMR images. Specifically, we design a combined computing model for simultaneous image registration and information fusion, which aggregates multi-sequence features into a common space to extract anatomical structures (i.e., myocardium). Consequently, we can highlight the informative regions in the common space via the extracted myocardium to improve MyoPS performance, considering the spatial relationship between myocardial pathologies and myocardium. Experiments on a private MS-CMR dataset and a public dataset from the MYOPS2020 challenge show that our framework could achieve promising performance for fully automatic MyoPS.
Citations: 0
AIROGS: Artificial Intelligence for RObust Glaucoma Screening Challenge
IF 10.6 | CAS Tier 1 (Medicine) | Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2023-02-03 | DOI: 10.48550/arXiv.2302.01738
Coen de Vente, Koen A. Vermeer, Nicolas Jaccard, He Wang, Hongyi Sun, F. Khader, D. Truhn, Temirgali Aimyshev, Yerkebulan Zhanibekuly, Tien-Dung Le, A. Galdran, M. Ballester, G. Carneiro, G. DevikaR, S. HrishikeshP., Densen Puthussery, Hong Liu, Zekang Yang, Satoshi Kondo, S. Kasai, E. Wang, Ashritha Durvasula, J'onathan Heras, M. Zapata, Teresa Ara'ujo, Guilherme Aresta, Hrvoje Bogunovi'c, Mustafa Arikan, Y. Lee, Hyun Bin Cho, Y. Choi, Abdul Qayyum, Imran Razzak, B. Ginneken, H. Lemij, Clara I. S'anchez
The early detection of glaucoma is essential in preventing visual impairment. Artificial intelligence (AI) can be used to analyze color fundus photographs (CFPs) in a cost-effective manner, making glaucoma screening more accessible. While AI models for glaucoma screening from CFPs have shown promising results in laboratory settings, their performance decreases significantly in real-world scenarios due to the presence of out-of-distribution and low-quality images. To address this issue, we propose the Artificial Intelligence for Robust Glaucoma Screening (AIROGS) challenge. This challenge includes a large dataset of around 113,000 images from about 60,000 patients and 500 different screening centers, and encourages the development of algorithms that are robust to ungradable and unexpected input data. We evaluated solutions from 14 teams in this paper and found that the best teams performed similarly to a set of 20 expert ophthalmologists and optometrists. The highest-scoring team achieved an area under the receiver operating characteristic curve of 0.99 (95% CI: 0.98-0.99) for detecting ungradable images on-the-fly. Additionally, many of the algorithms showed robust performance when tested on three other publicly available datasets. These results demonstrate the feasibility of robust AI-enabled glaucoma screening.
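The headline metric in this abstract, the area under the receiver operating characteristic curve, can be computed directly from scores and labels via the Mann-Whitney statistic: the probability that a randomly chosen positive scores above a randomly chosen negative, with ties counted half. The sketch below is not the challenge's evaluation code, only the standard definition; the toy labels and scores are made up.

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic. O(n_pos * n_neg),
    fine for small arrays; production code would sort once instead."""
    labels = np.asarray(labels, bool)
    scores = np.asarray(scores, float)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Perfectly separated scores give AUC = 1.0.
labels = np.array([0, 0, 1, 1])
scores = np.array([0.1, 0.2, 0.8, 0.9])
print(roc_auc(labels, scores))  # 1.0
```

A confidence interval like the paper's 95% CI of 0.98-0.99 is typically obtained by bootstrapping this statistic over resampled cases.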
Citations: 7
DEQ-MPI: A Deep Equilibrium Reconstruction with Learned Consistency for Magnetic Particle Imaging
IF 10.6 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2022-12-26 DOI: 10.1109/TMI.2023.3300704
Alper Gungor, Baris Askin, D. Soydan, Can Barış Top, E. Saritas, Tolga Çukur
Magnetic particle imaging (MPI) offers unparalleled contrast and resolution for tracing magnetic nanoparticles. A common imaging procedure calibrates a system matrix (SM) that is used to reconstruct data from subsequent scans. The ill-posed reconstruction problem can be solved by simultaneously enforcing data consistency based on the SM and regularizing the solution based on an image prior. Traditional hand-crafted priors cannot capture the complex attributes of MPI images, whereas recent MPI methods based on learned priors can suffer from extensive inference times or limited generalization performance. Here, we introduce a novel physics-driven method for MPI reconstruction based on a deep equilibrium model with learned data consistency (DEQ-MPI). DEQ-MPI reconstructs images by augmenting neural networks into an iterative optimization, as inspired by unrolling methods in deep learning. Yet, conventional unrolling methods are computationally restricted to few iterations resulting in non-convergent solutions, and they use hand-crafted consistency measures that can yield suboptimal capture of the data distribution. DEQ-MPI instead trains an implicit mapping to maximize the quality of a convergent solution, and it incorporates a learned consistency measure to better account for the data distribution. Demonstrations on simulated and experimental data indicate that DEQ-MPI achieves superior image quality and competitive inference time to state-of-the-art MPI reconstruction methods.
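The contrast the abstract draws between a fixed number of unrolled iterations and an equilibrium solve can be made concrete with a hand-crafted stand-in: iterate a mapping f, built from a gradient step on the data term followed by soft-thresholding, until a fixed point x* = f(x*) is reached rather than stopping after a preset unroll depth. In DEQ-MPI both the mapping and the consistency measure are learned networks; here the operator, the shrinkage "prior", and the toy sparse signal are all assumptions for illustration.

```python
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def fixed_point_recon(A, y, lam=0.05, tau=None, tol=1e-8, max_iter=5000):
    """Iterate x <- f(x) = soft_threshold(x - tau * A.T @ (A @ x - y), tau*lam)
    until the update stalls. The equilibrium of this hand-crafted f is the
    ISTA/LASSO solution; DEQ-style methods instead solve for the equilibrium
    of a learned mapping."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2  # step size ensuring convergence
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x_new = soft_threshold(x - tau * A.T @ (A @ x - y), tau * lam)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy compressed-sensing problem: recover a 3-sparse x from y = A @ x.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]
x_hat = fixed_point_recon(A, A @ x_true)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.5)))  # recovered support indices
```

Running to equilibrium, as opposed to a fixed shallow unroll, is what lets implicit models report a convergent solution whose quality does not depend on the unroll depth chosen at training time.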
Citations: 5
Stable deep MRI reconstruction using Generative Priors
IF 10.6 Tier 1 (Medicine) Q1 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS Pub Date : 2022-10-25 DOI: 10.48550/arXiv.2210.13834
Martin Zach, F. Knoll, T. Pock
Data-driven approaches recently achieved remarkable success in magnetic resonance imaging (MRI) reconstruction, but integration into clinical routine remains challenging due to a lack of generalizability and interpretability. In this paper, we address these challenges in a unified framework based on generative image priors. We propose a novel deep neural network based regularizer which is trained in a generative setting on reference magnitude images only. After training, the regularizer encodes higher-level domain statistics which we demonstrate by synthesizing images without data. Embedding the trained model in a classical variational approach yields high-quality reconstructions irrespective of the sub-sampling pattern. In addition, the model shows stable behavior when confronted with out-of-distribution data in the form of contrast variation. Furthermore, a probabilistic interpretation provides a distribution of reconstructions and hence allows uncertainty quantification. To reconstruct parallel MRI, we propose a fast algorithm to jointly estimate the image and the sensitivity maps. The results demonstrate competitive performance, on par with state-of-the-art end-to-end deep learning methods, while preserving the flexibility with respect to sub-sampling patterns and allowing for uncertainty quantification.
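The classical variational formulation mentioned in the abstract, a data-consistency term plus a regularizer, can be sketched on a 1-D toy: an orthonormal FFT with a sampling mask as the forward model, and a quadratic smoothness penalty standing in for the paper's learned generative prior. The variable-density mask and the test signal are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def variational_recon(y, mask, lam=0.05, step=0.5, n_iter=2000):
    """Gradient descent on 0.5*||mask * F(x) - y||^2 + 0.5*lam*||D x||^2,
    where F is the orthonormal FFT and D a circular finite difference.
    The smoothness term is a hand-crafted stand-in for a learned prior."""
    x = np.real(np.fft.ifft(mask * y, norm="ortho"))  # zero-filled start
    for _ in range(n_iter):
        resid = mask * np.fft.fft(x, norm="ortho") - y
        grad_data = np.real(np.fft.ifft(mask * resid, norm="ortho"))
        grad_prior = 2 * x - np.roll(x, 1) - np.roll(x, -1)  # D^T D x
        x -= step * (grad_data + lam * grad_prior)
    return x

# Smooth ground truth; variable-density mask keeping all low frequencies
# plus a random 30% of the rest, loosely mimicking MRI sub-sampling.
rng = np.random.default_rng(1)
n = 128
t = np.arange(n)
x_true = np.sin(2 * np.pi * t / n) + 0.5 * np.cos(6 * np.pi * t / n)
dist_to_dc = np.minimum(t, n - t)  # distance of each FFT bin to DC
mask = ((dist_to_dc <= 10) | (rng.random(n) < 0.3)).astype(float)
y = mask * np.fft.fft(x_true, norm="ortho")
x_hat = variational_recon(y, mask)
print(np.max(np.abs(x_hat - x_true)))  # small reconstruction error
```

Because the objective depends on the mask only through the data term, the same prior can be reused for any sub-sampling pattern, which is the flexibility the abstract highlights over end-to-end networks trained for one pattern.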
Citations: 1