
Proceedings. IEEE International Symposium on Biomedical Imaging: Latest Publications

DART: DEFORMABLE ANATOMY-AWARE REGISTRATION TOOLKIT FOR LUNG CT REGISTRATION WITH KEYPOINTS SUPERVISION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/ISBI56570.2024.10635326
Yunzheng Zhu, Luoting Zhuang, Yannan Lin, Tengyue Zhang, Hossein Tabatabaei, Denise R Aberle, Ashley E Prosper, Aichi Chien, William Hsu

Spatially aligning two computed tomography (CT) scans of the lung using automated image registration techniques is a challenging task due to the deformable nature of the lung. However, existing deep-learning-based lung CT registration models are not trained with explicit anatomical knowledge. We propose the deformable anatomy-aware registration toolkit (DART), a masked autoencoder (MAE)-based approach, to improve the keypoint-supervised registration of lung CTs. Our method incorporates features from multiple decoders of networks trained to segment anatomical structures, including the lung, ribs, vertebrae, lobes, vessels, and airways, to ensure that the MAE learns relevant features corresponding to the anatomy of the lung. The pretrained weights of the transformer encoder and patch embeddings are then used as the initialization for the training of downstream registration. We compare DART to existing state-of-the-art registration models. Our experiments show that DART outperforms the baseline models (Voxelmorph, ViT-V-Net, and MAE-TransRNet) in terms of target registration error, both for corrField-generated keypoints (17%, 13%, and 9% relative improvement, respectively) and for nodule bounding-box centers (27%, 10%, and 4% relative improvement, respectively). Our implementation is available at https://github.com/yunzhengzhu/DART.
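
As a hedged illustration of how the quoted gains could be computed, the sketch below evaluates target registration error (TRE) over keypoints and the relative improvement between two methods; the random keypoints and the NumPy implementation are illustrative assumptions, not part of the paper.

```python
import numpy as np

def target_registration_error(warped_pts, target_pts):
    """Mean Euclidean distance between corresponding keypoints (e.g., in mm)."""
    return float(np.linalg.norm(warped_pts - target_pts, axis=1).mean())

def relative_improvement(baseline_tre, proposed_tre):
    """Fractional reduction in TRE of the proposed method relative to a baseline."""
    return (baseline_tre - proposed_tre) / baseline_tre

# Toy usage with random 3D keypoints standing in for corrField-generated keypoints.
rng = np.random.default_rng(0)
true_pts = rng.uniform(0, 100, size=(50, 3))
baseline_warped = true_pts + rng.normal(0, 2.0, size=(50, 3))
dart_warped = true_pts + rng.normal(0, 1.6, size=(50, 3))
tre_base = target_registration_error(baseline_warped, true_pts)
tre_dart = target_registration_error(dart_warped, true_pts)
print(f"relative improvement: {relative_improvement(tre_base, tre_dart):.2%}")
```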

Citations: 0
MITIGATING OVER-SATURATED FLUORESCENCE IMAGES THROUGH A SEMI-SUPERVISED GENERATIVE ADVERSARIAL NETWORK.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635687
Shunxing Bao, Junlin Guo, Ho Hin Lee, Ruining Deng, Can Cui, Lucas W Remedios, Quan Liu, Qi Yang, Kaiwen Xu, Xin Yu, Jia Li, Yike Li, Joseph T Roland, Qi Liu, Ken S Lau, Keith T Wilson, Lori A Coburn, Bennett A Landman, Yuankai Huo

Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels. Traditional gamma correction methods for fixing saturation are limited, often incorrectly assuming uniform distribution of saturation, which is rarely the case in practice. This paper introduces a novel approach to correct saturation artifacts from a data-driven perspective. We introduce a two-stage, high-resolution hybrid generative adversarial network (HDmixGAN), which merges unpaired (CycleGAN) and paired (pix2pixHD) network architectures. This approach is designed to capitalize on the available small-scale paired data and the more extensive unpaired data from costly MxIF data. Specifically, we generate pseudo-paired data from large-scale unpaired over-saturated datasets with a CycleGAN, and train a Pix2pixGAN using both small-scale real and large-scale synthetic data derived from multiple DAPI staining rounds in MxIF. This method was validated against various baselines in a downstream nuclei detection task, improving the F1 score by 6% over the baseline. This is, to our knowledge, the first focused effort to address multi-round saturation in MxIF images, offering a specialized solution for enhancing cell analysis accuracy through improved image quality. The source code and implementation of the proposed method are available at https://github.com/MASILab/DAPIArtifactRemoval.git.
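
The two-stage flow described above can be sketched as follows; the tiny generators, tensor shapes, and omission of the adversarial discriminators are simplifying assumptions, not the paper's actual HDmixGAN architecture.

```python
import torch
import torch.nn as nn

def tiny_generator():
    # Stand-in for the CycleGAN / pix2pixHD generators used in the paper.
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))

# Stage 1 (assumed flow): a CycleGAN-style generator maps over-saturated DAPI tiles to
# artifact-free-looking tiles, yielding pseudo-paired targets for unpaired data.
g_sat_to_clean = tiny_generator()
unpaired_saturated = torch.rand(8, 1, 64, 64)
with torch.no_grad():
    pseudo_clean = g_sat_to_clean(unpaired_saturated)

# Stage 2 (assumed flow): a paired pix2pix-style generator trains on the union of the
# small real paired set and the large pseudo-paired set (discriminator terms omitted).
g_paired = tiny_generator()
optimizer = torch.optim.Adam(g_paired.parameters(), lr=2e-4)
real_inputs, real_targets = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
inputs = torch.cat([real_inputs, unpaired_saturated])
targets = torch.cat([real_targets, pseudo_clean])
loss = nn.functional.l1_loss(g_paired(inputs), targets)
loss.backward()
optimizer.step()
```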

Citations: 0
MV-Swin-T: MAMMOGRAM CLASSIFICATION WITH MULTI-VIEW SWIN TRANSFORMER.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635578
Sushmita Sarker, Prithul Sarker, George Bebis, Alireza Tavakkoli

Traditional deep learning approaches for breast cancer classification have predominantly concentrated on single-view analysis. In clinical practice, however, radiologists concurrently examine all views within a mammography exam, leveraging the inherent correlations in these views to effectively detect tumors. Acknowledging the significance of multi-view analysis, some studies have introduced methods that independently process mammogram views, either through distinct convolutional branches or simple fusion strategies, inadvertently leading to a loss of crucial inter-view correlations. In this paper, we propose an innovative multi-view network exclusively based on transformers to address challenges in mammographic image classification. Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information and promoting the coherent transfer of this information between views at the spatial feature map level. Furthermore, we conduct a comprehensive comparative analysis of the performance and effectiveness of transformer-based models under diverse settings, employing the CBIS-DDSM and Vin-Dr Mammo datasets. Our code is publicly available at https://github.com/prithuls/MV-Swin-T.
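
A minimal sketch of cross-view attention at the feature-map level is given below; it is a generic stand-in for the paper's shifted window-based dynamic attention block, and the dimensions and module names are assumptions.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Generic cross-view attention: tokens from one mammographic view attend to the other.
    Simplified stand-in for the shifted window-based dynamic attention block."""
    def __init__(self, dim: int = 96, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_a, view_b):
        # view_a, view_b: (batch, tokens, dim) spatial feature maps flattened to tokens
        fused, _ = self.attn(query=view_a, key=view_b, value=view_b)
        return self.norm(view_a + fused)   # residual fusion at the feature-map level

cc = torch.rand(2, 196, 96)    # e.g., CC-view tokens (assumed size)
mlo = torch.rand(2, 196, 96)   # e.g., MLO-view tokens (assumed size)
print(CrossViewAttention()(cc, mlo).shape)   # torch.Size([2, 196, 96])
```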

Citations: 0
RECONSTRUCTING RETINAL VISUAL IMAGES FROM 3T FMRI DATA ENHANCED BY UNSUPERVISED LEARNING.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635641
Yujian Xiong, Wenhui Zhu, Zhong-Lin Lu, Yalin Wang

The reconstruction of human visual inputs from brain activity, particularly through functional Magnetic Resonance Imaging (fMRI), holds promising avenues for unraveling the mechanisms of the human visual system. Despite the significant strides made by deep learning methods in improving the quality and interpretability of visual reconstruction, there remains a substantial demand for high-quality, long-duration, subject-specific 7-Tesla fMRI experiments. The challenge arises in integrating diverse smaller 3-Tesla datasets or accommodating new subjects with brief and low-quality fMRI scans. In response to these constraints, we propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN), leveraging unpaired training across two distinct fMRI datasets in 7T and 3T, respectively. This approach aims to overcome the limitations of the scarcity of high-quality 7-Tesla data and the challenges associated with brief and low-quality scans in 3-Tesla experiments. In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images compared to data-intensive methods trained and tested on a single subject.
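
The unpaired 7T/3T training idea can be illustrated with a cycle-consistency term, sketched below under assumed toy volume sizes and stand-in generators; the adversarial terms and the paper's actual network design are omitted.

```python
import torch
import torch.nn as nn

def tiny_g():
    # Stand-in 3D generator; the paper's networks are substantially larger.
    return nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv3d(8, 1, 3, padding=1))

g_3t_to_7t, g_7t_to_3t = tiny_g(), tiny_g()
x3 = torch.rand(2, 1, 16, 16, 16)   # unpaired 3T-like volume (assumed toy size)
x7 = torch.rand(2, 1, 16, 16, 16)   # unpaired 7T-like volume (assumed toy size)

# Cycle-consistency terms that allow training without paired 3T/7T scans;
# the adversarial discriminator losses used by the GAN are omitted here.
cycle_loss = (nn.functional.l1_loss(g_7t_to_3t(g_3t_to_7t(x3)), x3) +
              nn.functional.l1_loss(g_3t_to_7t(g_7t_to_3t(x7)), x7))
cycle_loss.backward()
```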

Citations: 0
A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/ISBI56570.2024.10635138
Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya

Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning has been used in early works, there has been a recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted ℓ1 norm, which has been shown to approximate the ℓ0 norm, for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.
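
A hedged sketch of such a loss is shown below: a data-fidelity term in k-space plus a reweighted ℓ1 penalty whose weights 1/(|x|+ε) make it approximate the ℓ0 norm. The sampling mask, balance weight, and use of the image itself (rather than a transform domain) are assumptions, not the paper's exact formulation.

```python
import torch

def reweighted_l1(x, eps: float = 1e-6):
    """Reweighted l1 penalty: weights 1/(|x|+eps) make it approximate the l0 norm.
    Here x stands in for the network output image (or its sparse-transform coefficients)."""
    w = 1.0 / (x.abs().detach() + eps)
    return (w * x.abs()).sum()

def data_fidelity(x, y, mask):
    """Consistency between the reconstruction and the acquired k-space samples."""
    kspace = torch.fft.fft2(x, norm="ortho")
    return ((mask * (kspace - y)).abs() ** 2).sum()

# Toy usage; the balance weight lam and the random sampling mask are assumptions.
x = torch.rand(1, 128, 128, requires_grad=True)
y = torch.fft.fft2(torch.rand(1, 128, 128), norm="ortho")
mask = (torch.rand(1, 128, 128) > 0.7).float()
lam = 1e-3
loss = data_fidelity(x, y, mask) + lam * reweighted_l1(x)
loss.backward()
```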

{"title":"A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION.","authors":"Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya","doi":"10.1109/ISBI56570.2024.10635138","DOIUrl":"10.1109/ISBI56570.2024.10635138","url":null,"abstract":"<p><p>Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning has been used in early works, there has been a recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted <math> <mrow><msub><mi>l</mi> <mn>1</mn></msub> </mrow> </math> norm that has been shown to approximate the <math> <mrow><msub><mi>l</mi> <mn>0</mn></msub> </mrow> </math> norm for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143070254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UNSUPERVISED AIRWAY TREE CLUSTERING WITH DEEP LEARNING: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635651
Sneha N Naik, Elsa D Angelini, R Graham Barr, Norrina Allen, Alain Bertoni, Eric A Hoffman, Ani Manichaikul, Jim Pankow, Wendy Post, Yifei Sun, Karol Watson, Benjamin M Smith, Andrew F Laine

High-resolution full lung CT scans now enable the detailed segmentation of airway trees up to the 6th branching generation. The airway binary masks display very complex tree structures that may encode biological information relevant to disease risk and yet remain challenging to exploit via traditional methods such as meshing or skeletonization. Recent clinical studies suggest that some variations in shape patterns and caliber of the human airway tree are highly associated with adverse health outcomes, including all-cause mortality and incident COPD. However, quantitative characterization of the variations observed in CT-segmented airway trees remains incomplete, as does our understanding of their clinical and developmental implications. In this work, we present an unsupervised deep-learning pipeline for feature extraction and clustering of human airway trees, learned directly from projections of 3D airway segmentations. We identify four reproducible and clinically distinct airway sub-types in the MESA Lung CT cohort.
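
A generic version of such a pipeline is sketched below: a convolutional encoder applied to mask projections followed by k-means with four clusters, mirroring the four reported sub-types. The encoder shown here is untrained and the shapes are assumptions; the paper's actual feature extractor and clustering procedure may differ.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

# Assumed setup: 2D projections of binary airway segmentation masks.
encoder = nn.Sequential(
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
projections = torch.rand(32, 1, 128, 128)        # stand-in airway-mask projections
with torch.no_grad():
    features = encoder(projections).numpy()      # unsupervised embedding (untrained here)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
print(labels[:10])
```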

Citations: 0
A DEEP LEARNING FRAMEWORK TO CHARACTERIZE NOISY LABELS IN EPILEPTOGENIC ZONE LOCALIZATION USING FUNCTIONAL CONNECTIVITY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635583
Naresh Nandakumar, David Hsu, Raheel Ahmed, Archana Venkataraman

Resting-state fMRI (rs-fMRI) has emerged as a viable tool to localize the epileptogenic zone (EZ) in patients with medication-refractory focal epilepsy. However, due to clinical protocol, datasets with reliable labels for the EZ are scarce. Some studies have used the entire resection area from post-operative structural T1 scans to act as the ground truth EZ labels during training and testing. These labels are subject to noise, as the resection area is usually larger than the actual EZ tissue. We develop a mathematical framework for characterizing noisy labels in EZ localization. We use a multi-task deep learning framework to identify both the probability of a noisy label as well as the localization prediction for each ROI. We train our framework on a simulated dataset derived from the Human Connectome Project and evaluate it on both the simulated and a clinical epilepsy dataset. We show superior localization performance of our method against published localization networks on both the real and simulated datasets.
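
A minimal two-head sketch is shown below; the way the noise head is supervised and used to down-weight the localization loss is one plausible reading, labeled as an assumption, and is not the paper's exact mathematical framework.

```python
import torch
import torch.nn as nn

class TwoHeadROIModel(nn.Module):
    """Per-ROI multi-task sketch: one head for EZ localization, one for the probability
    that the ROI's resection-derived training label is noisy (both as logits)."""
    def __init__(self, in_dim: int = 116):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.loc_head = nn.Linear(64, 1)
        self.noise_head = nn.Linear(64, 1)

    def forward(self, conn_features):
        h = self.backbone(conn_features)
        return self.loc_head(h), self.noise_head(h)

# Toy input: each of 116 ROIs described by its row of the functional connectivity matrix.
model = TwoHeadROIModel()
conn = torch.rand(116, 116)
resection_labels = torch.randint(0, 2, (116, 1)).float()   # noisy labels from resection masks

loc_logits, noise_logits = model(conn)
noise_prob = torch.sigmoid(noise_logits)
# One plausible (assumed) coupling: down-weight ROIs whose label is likely noisy.
per_roi_bce = nn.functional.binary_cross_entropy_with_logits(
    loc_logits, resection_labels, reduction="none")
loss = ((1.0 - noise_prob.detach()) * per_roi_bce).mean()
loss.backward()
```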

Citations: 0
ROBUST QUANTIFICATION OF PERCENT EMPHYSEMA ON CT VIA DOMAIN ATTENTION: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635299
Xuzhe Zhang, Elsa D Angelini, Eric A Hoffman, Karol E Watson, Benjamin M Smith, R Graham Barr, Andrew F Laine

Robust quantification of pulmonary emphysema on computed tomography (CT) remains challenging for large-scale research studies that involve scans from different scanner types and for translation to clinical scans. Although the domain shifts across CT scanners are subtle compared to those across other modalities (e.g., MRI) or cross-modality settings, emphysema quantification is highly sensitive to them. Such subtle differences limit the application of general domain adaptation methods, such as image-translation-based methods, as the contrast difference is too subtle to be distinguished. Existing studies have explored several directions to tackle this challenge, including density correction, noise filtering, regression, hidden Markov measure field (HMMF) model-based segmentation, and volume-adjusted lung density. Despite some promising results, previous studies either required a tedious workflow or eliminated opportunities for downstream emphysema subtyping, limiting efficient adaptation in large-scale studies. To alleviate this dilemma, we developed an end-to-end deep learning framework based on an existing HMMF segmentation framework. We first demonstrate that a regular UNet cannot replicate the existing HMMF results because of the lack of scanner priors. We then design a novel domain attention block, a simple yet efficient cross-modal block that fuses image visual features with quantitative scanner priors (a sequence), which significantly improves the results.
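
One way such a cross-modal block could look is sketched below, with image feature tokens attending to embedded scanner priors; the token layout, prior encoding, and dimensions are assumptions rather than the paper's actual design.

```python
import torch
import torch.nn as nn

class DomainAttentionBlock(nn.Module):
    """Sketch of a cross-modal attention block: image feature tokens attend to an
    embedded sequence of quantitative scanner priors. Details here are assumptions."""
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.prior_embed = nn.Linear(1, dim)            # embed each scalar prior as a token
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, img_tokens, scanner_priors):
        # img_tokens: (B, N, dim) flattened feature map; scanner_priors: (B, n_priors)
        prior_tokens = self.prior_embed(scanner_priors.unsqueeze(-1))   # (B, n_priors, dim)
        fused, _ = self.attn(query=img_tokens, key=prior_tokens, value=prior_tokens)
        return img_tokens + fused

block = DomainAttentionBlock()
print(block(torch.rand(2, 256, 64), torch.rand(2, 4)).shape)   # torch.Size([2, 256, 64])
```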

Citations: 0
CYCLE-CONSISTENT SELF-SUPERVISED LEARNING FOR IMPROVED HIGHLY-ACCELERATED MRI RECONSTRUCTION. 周期一致的自我监督学习提高高加速mri重建。
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635895
Chi Zhang, Omer Burak Demirel, Mehmet Akçakaya

Physics-driven deep learning (PD-DL) has become a powerful tool for accelerated MRI. Recent work has also explored unsupervised learning for PD-DL, including self-supervised learning. However, at very high acceleration rates, such approaches show performance deterioration. In this study, we propose to use cyclic-consistency (CC) to improve self-supervised learning for highly accelerated MRI. In our proposed CC, simulated measurements are obtained by undersampling the network output using patterns drawn from the same distribution as the true one. The reconstructions of these simulated measurements are obtained using the same network and are then compared to the acquired data at the true sampling locations. This CC approach is used in conjunction with a masking-based self-supervised loss. Results show that the proposed method can substantially reduce aliasing artifacts at high acceleration rates, including rate-6 and rate-8 fastMRI knee imaging and 20-fold HCP-style fMRI.
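
The cyclic-consistency idea can be sketched as below with a toy single-coil reconstruction network; the network, FFT forward model, and sampling-mask distributions are assumptions, and the masking-based self-supervised loss it is combined with is omitted.

```python
import torch
import torch.nn as nn

def fft2c(x):  return torch.fft.fft2(x, norm="ortho")
def ifft2c(k): return torch.fft.ifft2(k, norm="ortho")

# Stand-in for the physics-driven unrolled reconstruction network.
recon_net = nn.Sequential(nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 2, 3, padding=1))

def reconstruct(kspace, mask):
    zf = ifft2c(kspace * mask)                               # zero-filled image
    out = recon_net(torch.stack([zf.real, zf.imag], dim=1))  # real/imag as channels
    return torch.complex(out[:, 0], out[:, 1])

acquired_k = fft2c(torch.rand(1, 64, 64))       # toy "acquired" k-space
omega = (torch.rand(1, 64, 64) > 0.75).float()  # true sampling pattern
lam = (torch.rand(1, 64, 64) > 0.75).float()    # simulated pattern from the same distribution

x1 = reconstruct(acquired_k, omega)             # first reconstruction from acquired data
simulated_k = fft2c(x1) * lam                   # simulated measurement of the output
x2 = reconstruct(simulated_k, lam)              # second pass through the same network
cc_loss = ((fft2c(x2) - acquired_k).abs() * omega).mean()   # compare at true locations
cc_loss.backward()
```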

Citations: 0
MAPPING ALZHEIMER'S DISEASE PSEUDO-PROGRESSION WITH MULTIMODAL BIOMARKER TRAJECTORY EMBEDDINGS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635249
Lina Takemaru, Shu Yang, Ruiming Wu, Bing He, Christos Davatzikos, Jingwen Yan, Li Shen

Alzheimer's Disease (AD) is a neurodegenerative disorder characterized by progressive cognitive degeneration and motor impairment, affecting millions worldwide. Mapping the progression of AD is crucial for early detection of loss of brain function, timely intervention, and development of effective treatments. However, accurate measurements of disease progression are still challenging at present. This study presents a novel approach to understanding the heterogeneous pathways of AD through longitudinal biomarker data from medical imaging and other modalities. We propose an analytical pipeline adopting two popular machine learning methods from the single-cell transcriptomics domain, PHATE and Slingshot, to project multimodal biomarker trajectories to a low-dimensional space. These embeddings serve as our pseudotime estimates. We applied this pipeline to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to align longitudinal data across individuals at various disease stages. Our approach mirrors the technique used to cluster single-cell data into cell types based on developmental timelines. Our pseudotime estimates revealed distinct patterns of disease evolution and biomarker changes over time, providing a deeper understanding of the temporal dynamics of AD. The results show the potential of the approach in the clinical domain of neurodegenerative diseases, enabling more precise disease modeling and early diagnosis.
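
A hedged sketch of the embedding step with the phate package is given below; the Slingshot trajectory fit is not reproduced, and the ordering along the first PHATE axis shown here is only an illustrative stand-in for a pseudotime estimate, not the paper's method.

```python
import numpy as np
import phate   # pip install phate

# Toy stand-in for longitudinal multimodal biomarker vectors (subject-visits x features).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))

# Low-dimensional embedding with PHATE; the paper additionally fits Slingshot
# trajectories on such an embedding to obtain pseudotime (not reproduced here).
emb = phate.PHATE(n_components=2, random_state=0).fit_transform(X)

# Crude illustrative surrogate (an assumption): rank visits along the first
# embedding axis as a rough pseudo-ordering of disease progression.
pseudotime_rank = np.argsort(emb[:, 0])
print(emb.shape, pseudotime_rank[:5])
```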

Citations: 0