
Proceedings. IEEE International Symposium on Biomedical Imaging — Latest Publications

MITIGATING OVER-SATURATED FLUORESCENCE IMAGES THROUGH A SEMI-SUPERVISED GENERATIVE ADVERSARIAL NETWORK.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635687
Shunxing Bao, Junlin Guo, Ho Hin Lee, Ruining Deng, Can Cui, Lucas W Remedios, Quan Liu, Qi Yang, Kaiwen Xu, Xin Yu, Jia Li, Yike Li, Joseph T Roland, Qi Liu, Ken S Lau, Keith T Wilson, Lori A Coburn, Bennett A Landman, Yuankai Huo

Multiplex immunofluorescence (MxIF) imaging is a critical tool in biomedical research, offering detailed insights into cell composition and spatial context. As an example, DAPI staining identifies cell nuclei, while CD20 staining helps segment cell membranes in MxIF. However, a persistent challenge in MxIF is saturation artifacts, which hinder single-cell level analysis in areas with over-saturated pixels. Traditional gamma correction methods for fixing saturation are limited, as they often incorrectly assume a uniform distribution of saturation, which is rarely the case in practice. This paper introduces a novel approach that corrects saturation artifacts from a data-driven perspective. We introduce a two-stage, high-resolution hybrid generative adversarial network (HDmixGAN), which merges unpaired (CycleGAN) and paired (pix2pixHD) network architectures. This approach is designed to capitalize on both the available small-scale paired data and the more extensive unpaired data produced by costly MxIF acquisitions. Specifically, we generate pseudo-paired data from large-scale unpaired over-saturated datasets with a CycleGAN, and train a Pix2pixGAN using both small-scale real and large-scale synthetic data derived from multiple DAPI staining rounds in MxIF. The method was validated against various baselines in a downstream nuclei detection task, improving the F1 score by 6% over the baseline. This is, to our knowledge, the first focused effort to address multi-round saturation in MxIF images, offering a specialized solution for enhancing cell analysis accuracy through improved image quality. The source code and implementation of the proposed method are available at https://github.com/MASILab/DAPIArtifactRemoval.git.
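The failure mode of uniform gamma correction noted in the abstract is easy to demonstrate: once pixels clip at the sensor maximum, a global gamma curve rescales mid-tones but cannot separate intensities that saturated to the same value. A minimal pure-Python sketch (illustrative only; the values and the `gamma_correct` helper are hypothetical, not the authors' pipeline):

```python
def gamma_correct(pixels, gamma, max_val=255):
    """Apply a global gamma curve to 8-bit intensities."""
    return [round(max_val * (p / max_val) ** gamma) for p in pixels]

# Two distinct true intensities both clip to 255 at acquisition time.
true_signal = [100, 200, 300, 400]              # underlying fluorescence
observed = [min(p, 255) for p in true_signal]   # sensor saturates at 255

corrected = gamma_correct(observed, gamma=2.0)

# A gamma of 2 darkens mid-tones, but every saturated pixel maps to the
# same output value: the structure lost to clipping is unrecoverable.
assert observed[2] == observed[3] == 255
assert corrected[2] == corrected[3] == 255
```

This is why the paper turns to a learned, data-driven correction rather than a global intensity transform.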

QUANTIFYING WHITE MATTER HYPERINTENSITY AND BRAIN VOLUMES IN HETEROGENEOUS CLINICAL AND LOW-FIELD PORTABLE MRI.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635502
Pablo Laso, Stefano Cerri, Annabel Sorby-Adams, Jennifer Guo, Farrah Mateen, Philipp Goebl, Jiaming Wu, Peirong Liu, Hongwei Li, Sean I Young, Benjamin Billot, Oula Puonti, Gordon Sze, Sam Payabavash, Adam DeHavenon, Kevin N Sheth, Matthew S Rosen, John Kirsch, Nicola Strisciuglio, Jelmer M Wolterink, Arman Eshaghi, Frederik Barkhof, W Taylor Kimberly, Juan Eugenio Iglesias

Brain atrophy and white matter hyperintensity (WMH) are critical neuroimaging features for ascertaining brain injury in cerebrovascular disease and multiple sclerosis. Automated segmentation and quantification are desirable, but existing methods require high-resolution MRI with a good signal-to-noise ratio (SNR). This precludes application to clinical and low-field portable MRI (pMRI) scans, thus hampering large-scale tracking of atrophy and WMH progression, especially in underserved areas where pMRI has huge potential. Here we present a method that segments white matter hyperintensity and 36 brain regions from scans of any resolution and contrast (including pMRI) without retraining. We show results on eight public datasets and on a private dataset with paired high- and low-field scans (3T and 64mT), where we attain strong correlation between the WMH (ρ=.85) and hippocampal volumes (ρ=.89) estimated at both fields. Our method is publicly available as part of FreeSurfer, at: http://surfer.nmr.mgh.harvard.edu/fswiki/WMH-SynthSeg.
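The correlations reported above (ρ=.85, ρ=.89) compare volumes estimated at the two field strengths. Assuming Spearman's ρ (a common reading of this notation), it can be computed from scratch as the Pearson correlation of the ranks; the subject volumes below are hypothetical, invented for illustration:

```python
def rank(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        mean_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = mean_rank
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical WMH volumes (mL) for five subjects at 3T and at 64mT.
vol_3t = [4.1, 12.5, 7.3, 20.8, 2.2]
vol_64mt = [4.9, 11.0, 8.1, 19.5, 3.0]
rho = spearman_rho(vol_3t, vol_64mt)  # identical orderings give rho = 1.0
```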

MV-Swin-T: MAMMOGRAM CLASSIFICATION WITH MULTI-VIEW SWIN TRANSFORMER.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635578
Sushmita Sarker, Prithul Sarker, George Bebis, Alireza Tavakkoli

Traditional deep learning approaches for breast cancer classification have predominantly concentrated on single-view analysis. In clinical practice, however, radiologists concurrently examine all views within a mammography exam, leveraging the inherent correlations in these views to effectively detect tumors. Acknowledging the significance of multi-view analysis, some studies have introduced methods that process mammogram views independently, either through distinct convolutional branches or simple fusion strategies, inadvertently leading to a loss of crucial inter-view correlations. In this paper, we propose an innovative multi-view network based exclusively on transformers to address challenges in mammographic image classification. Our approach introduces a novel shifted window-based dynamic attention block, facilitating the effective integration of multi-view information and promoting the coherent transfer of this information between views at the spatial feature map level. Furthermore, we conduct a comprehensive comparative analysis of the performance and effectiveness of transformer-based models under diverse settings, employing the CBIS-DDSM and VinDr-Mammo datasets. Our code is publicly available at https://github.com/prithuls/MV-Swin-T.
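The core fusion idea above, letting feature tokens from one mammographic view attend over tokens from another, can be sketched with plain scaled dot-product attention. This is a toy illustration, not the paper's shifted window-based dynamic attention block, and the CC/MLO token values are invented:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_view_attention(queries, keys, values):
    """Each query token (one view) attends over all key/value tokens
    (the other view): plain scaled dot-product attention."""
    d = len(keys[0])
    fused = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        fused.append([sum(w * v[j] for w, v in zip(weights, values))
                      for j in range(len(values[0]))])
    return fused

# Toy feature tokens: two CC-view tokens attending over three MLO-view tokens.
cc = [[1.0, 0.0], [0.0, 1.0]]
mlo = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
fused = cross_view_attention(cc, mlo, mlo)
```

Each fused token is a convex combination of the other view's tokens, which is how inter-view correlation is preserved rather than discarded by late fusion.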

RECONSTRUCTING RETINAL VISUAL IMAGES FROM 3T FMRI DATA ENHANCED BY UNSUPERVISED LEARNING.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635641
Yujian Xiong, Wenhui Zhu, Zhong-Lin Lu, Yalin Wang

The reconstruction of human visual inputs from brain activity, particularly through functional Magnetic Resonance Imaging (fMRI), holds promising avenues for unraveling the mechanisms of the human visual system. Despite the significant strides made by deep learning methods in improving the quality and interpretability of visual reconstruction, there remains a substantial demand for high-quality, long-duration, subject-specific 7-Tesla fMRI experiments. The challenge arises in integrating diverse smaller 3-Tesla datasets or accommodating new subjects with brief and low-quality fMRI scans. In response to these constraints, we propose a novel framework that generates enhanced 3T fMRI data through an unsupervised Generative Adversarial Network (GAN), leveraging unpaired training across two distinct fMRI datasets in 7T and 3T, respectively. This approach aims to overcome the limitations of the scarcity of high-quality 7-Tesla data and the challenges associated with brief and low-quality scans in 3-Tesla experiments. In this paper, we demonstrate the reconstruction capabilities of the enhanced 3T fMRI data, highlighting its proficiency in generating superior input visual images compared to data-intensive methods trained and tested on a single subject.
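Unpaired training between the 7T and 3T domains typically relies on a cycle-consistency term, which penalizes the difference between an input and its round-trip translation through both generators. A toy sketch under that assumption (the generator functions here are trivial placeholders, not the authors' networks):

```python
def cycle_consistency_loss(batch, g_forward, g_backward):
    """Mean absolute error between each sample x and g_backward(g_forward(x))."""
    total, count = 0.0, 0
    for x in batch:
        x_rec = g_backward(g_forward(x))
        total += sum(abs(a - b) for a, b in zip(x, x_rec))
        count += len(x)
    return total / count

# Toy "generators": a perfect inverse pair gives zero cycle loss.
fwd = lambda x: [2 * v for v in x]     # stand-in for 3T -> 7T
bwd = lambda x: [v / 2 for v in x]     # stand-in for 7T -> 3T
batch = [[1.0, 2.0], [3.0, 4.0]]
loss = cycle_consistency_loss(batch, fwd, bwd)  # 0.0 for exact inverses
```

In an actual unpaired GAN this term is combined with adversarial losses in both directions; here it simply shows what "consistency without paired data" means.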

DIFFERENTIABLE VQ-VAE'S FOR ROBUST WHITE MATTER STREAMLINE ENCODINGS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635543
Andrew Lizarraga, Brandon Taraku, Edouardo Honig, Ying Nian Wu, Shantanu H Joshi

Given the complex geometry of white matter streamlines, autoencoders have been proposed as a dimension-reduction tool to simplify the analysis of streamlines in a low-dimensional latent space. Despite these recent successes, however, the majority of encoder architectures perform dimension reduction only on single streamlines rather than on a full bundle of streamlines. This is a severe limitation: it disregards the global geometric structure of the bundle in favor of individual fibers. Moreover, the latent space may not be well structured, which casts doubt on its interpretability. In this paper we propose a novel Differentiable Vector Quantized Variational Autoencoder, which is engineered to ingest an entire bundle of streamlines as a single data point and provides reliable, trustworthy encodings that can later be used to analyze streamlines in the latent space. Comparisons with several state-of-the-art autoencoders demonstrate superior performance in both encoding and synthesis.
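The vector-quantization step at the heart of any VQ-VAE assigns each encoder output to its nearest codebook entry. A minimal sketch of that assignment (toy 2-D features and a toy codebook, not the paper's learned quantizer):

```python
def quantize(vectors, codebook):
    """Assign each vector to its nearest codebook entry (squared L2)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    indices = [min(range(len(codebook)), key=lambda k: sqdist(v, codebook[k]))
               for v in vectors]
    return indices, [codebook[i] for i in indices]

# Toy 2-D "streamline features" and a 3-entry codebook.
codebook = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]
features = [[0.1, -0.1], [0.9, 1.2], [1.8, 0.1]]
indices, codes = quantize(features, codebook)
```

In a standard VQ-VAE this argmin is non-differentiable and is typically bypassed with a straight-through gradient estimator; making the quantization step differentiable is what the "Differentiable" in the paper's title refers to.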

A DEEP LEARNING FRAMEWORK TO CHARACTERIZE NOISY LABELS IN EPILEPTOGENIC ZONE LOCALIZATION USING FUNCTIONAL CONNECTIVITY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635583
Naresh Nandakumar, David Hsu, Raheel Ahmed, Archana Venkataraman

Resting-state fMRI (rs-fMRI) has emerged as a viable tool to localize the epileptogenic zone (EZ) in medication-refractory focal epilepsy patients. However, due to clinical protocol, datasets with reliable labels for the EZ are scarce. Some studies have used the entire resection area from post-operative structural T1 scans as the ground-truth EZ labels during training and testing. These labels are subject to noise, as the resection area is usually larger than the actual EZ tissue. We develop a mathematical framework for characterizing noisy labels in EZ localization. We use a multi-task deep learning framework to identify both the probability of a noisy label and the localization prediction for each ROI. We train our framework on a simulated dataset derived from the Human Connectome Project and evaluate it on both the simulated and a clinical epilepsy dataset. Our method shows superior localization performance against published localization networks on both the real and simulated datasets.
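The label-noise problem described above arises because the resection mask strictly contains the true EZ, so some positively labelled ROIs are not actually epileptogenic. A toy sketch of the resulting noise rate (the ROI ids are hypothetical, chosen only to illustrate the set relationship):

```python
def label_noise_rate(resection_rois, true_ez_rois):
    """Fraction of positively-labelled ROIs that are noisy, assuming the
    resection mask (used as ground truth) contains the true EZ."""
    resection, ez = set(resection_rois), set(true_ez_rois)
    noisy = resection - ez   # resected but not actually epileptogenic
    return len(noisy) / len(resection)

# Hypothetical ROI ids: the resection covers 6 ROIs, the true EZ only 2.
resection = [3, 4, 5, 10, 11, 12]
true_ez = [4, 5]
rate = label_noise_rate(resection, true_ez)  # two thirds of positives are noise
```

It is exactly this per-ROI "probability of being a noisy positive" that the multi-task network above learns to predict alongside the localization output.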

UNSUPERVISED AIRWAY TREE CLUSTERING WITH DEEP LEARNING: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635651
Sneha N Naik, Elsa D Angelini, R Graham Barr, Norrina Allen, Alain Bertoni, Eric A Hoffman, Ani Manichaikul, Jim Pankow, Wendy Post, Yifei Sun, Karol Watson, Benjamin M Smith, Andrew F Laine

High-resolution full lung CT scans now enable the detailed segmentation of airway trees up to the 6th branching generation. The airway binary masks display very complex tree structures that may encode biological information relevant to disease risk, yet they remain challenging to exploit via traditional methods such as meshing or skeletonization. Recent clinical studies suggest that some variations in the shape patterns and caliber of the human airway tree are highly associated with adverse health outcomes, including all-cause mortality and incident COPD. However, quantitative characterization of the variations observed on CT-segmented airway trees remains incomplete, as does our understanding of their clinical and developmental implications. In this work, we present an unsupervised deep-learning pipeline for feature extraction and clustering of human airway trees, learned directly from projections of 3D airway segmentations. We identify four reproducible and clinically distinct airway sub-types in the MESA Lung CT cohort.
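Once each airway tree is embedded as a feature vector, discovering sub-types reduces to standard unsupervised clustering. A self-contained Lloyd's k-means on toy 2-D "airway embedding" points illustrates the idea (the paper's actual clustering procedure and features may differ):

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's algorithm on 2-D features, initialized with the
    first k points (deterministic for this toy example)."""
    def d2(p, c):
        return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
    centers = [tuple(p) for p in points[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: d2(p, centers[j]))].append(p)
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return [min(range(k), key=lambda j: d2(p, centers[j])) for p in points]

# Two well-separated toy "airway embedding" clusters.
pts = [(0.0, 0.1), (0.2, -0.1), (0.1, 0.0),
       (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
labels = kmeans(pts, k=2)
```

Reproducibility of the four sub-types reported above is what distinguishes them from arbitrary partitions a clustering algorithm will always produce.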

A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/ISBI56570.2024.10635138
Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya

Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning was used in early works, there has been recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity, to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted ℓ1 norm, which has been shown to approximate the ℓ0 norm, for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.
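The ℓ0 approximation mentioned above works because, with weights w_i = 1/(|x_i| + ε), each reweighted term |x_i|/(|x_i| + ε) is close to 1 for non-zero entries and 0 otherwise, so the weighted sum counts non-zeros. A quick numeric check (the vector is arbitrary; this is the standard reweighting scheme, not necessarily the paper's exact loss):

```python
def reweighted_l1(x, eps=1e-8):
    """sum_i |x_i| / (|x_i| + eps), i.e. a reweighted l1 norm with
    weights w_i = 1 / (|x_i| + eps); approaches the l0 norm as eps -> 0."""
    return sum(abs(v) / (abs(v) + eps) for v in x)

def l0_norm(x):
    """Number of non-zero entries."""
    return sum(1 for v in x if v != 0)

x = [0.0, 3.2, 0.0, -0.5, 1e-3]
approx = reweighted_l1(x)  # close to l0_norm(x) == 3
```

Unlike the ℓ0 norm itself, each reweighted evaluation is a weighted ℓ1 norm, which is what keeps the proposed loss convex in the reconstruction.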

物理驱动的深度学习(PD-DL)方法因改善快速MRI扫描的重建而受到欢迎。虽然监督学习已经在早期的工作中使用,但最近对PD-DL训练的无监督学习方法产生了兴趣。在这项工作中,我们从统计图像处理和压缩感知(CS)中获得灵感,并提出了一种新的凸损失函数作为替代学习策略。我们的损失函数评估输出图像的可压缩性,同时确保数据保真度,以评估多种设置下的重建质量,包括监督、无监督和零拍摄场景。特别是,我们利用重新加权的1.1规范,该规范已被证明近似于质量评估的1.0规范。结果表明,使用所提出的损失公式训练的PD-DL网络优于传统方法,同时保持与使用现有监督和无监督技术训练的PD-DL模型相似的质量。
{"title":"A CONVEX COMPRESSIBILITY-INSPIRED UNSUPERVISED LOSS FUNCTION FOR PHYSICS-DRIVEN DEEP LEARNING RECONSTRUCTION.","authors":"Yaşar Utku Alçalar, Merve Gülle, Mehmet Akçakaya","doi":"10.1109/ISBI56570.2024.10635138","DOIUrl":"10.1109/ISBI56570.2024.10635138","url":null,"abstract":"<p><p>Physics-driven deep learning (PD-DL) methods have gained popularity for improved reconstruction of fast MRI scans. Though supervised learning has been used in early works, there has been a recent interest in unsupervised learning methods for training PD-DL. In this work, we take inspiration from statistical image processing and compressed sensing (CS), and propose a novel convex loss function as an alternative learning strategy. Our loss function evaluates the compressibility of the output image while ensuring data fidelity to assess the quality of reconstruction in versatile settings, including supervised, unsupervised, and zero-shot scenarios. In particular, we leverage the reweighted <math> <mrow><msub><mi>l</mi> <mn>1</mn></msub> </mrow> </math> norm that has been shown to approximate the <math> <mrow><msub><mi>l</mi> <mn>0</mn></msub> </mrow> </math> norm for quality evaluation. Results show that the PD-DL networks trained with the proposed loss formulation outperform conventional methods, while maintaining similar quality to PD-DL models trained using existing supervised and unsupervised techniques.</p>","PeriodicalId":74566,"journal":{"name":"Proceedings. 
IEEE International Symposium on Biomedical Imaging","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11779509/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143070254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
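The compressibility criterion in the abstract above — a reweighted l1 penalty, which approximates the l0 norm, combined with a data-fidelity term — can be sketched in a few lines. This is a toy 1-D illustration under stated assumptions: the paper applies the penalty within a PD-DL training loop on image data, and the least-squares fidelity form and the weight `lam` here are illustrative choices, not the authors' exact formulation.

```python
import numpy as np

def reweighted_l1(x, eps=1e-8):
    # w_i = 1 / (|x_i| + eps), so sum_i w_i * |x_i| tends to the l0 "norm"
    # (the number of nonzero entries) as eps -> 0
    ax = np.abs(x)
    return float(np.sum(ax / (ax + eps)))

def recon_loss(x, A, y, lam=1.0, eps=1e-8):
    # compressibility of the output plus a data-fidelity term ||A x - y||_2^2
    return reweighted_l1(x, eps) + lam * float(np.sum((A @ x - y) ** 2))
```

On a vector with two nonzero entries, `reweighted_l1` evaluates to roughly 2 regardless of the entries' magnitudes, which is exactly the l0-like behavior the abstract refers to.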
Citations: 0
PROMISE: PROMPT-DRIVEN 3D MEDICAL IMAGE SEGMENTATION USING PRETRAINED IMAGE FOUNDATION MODELS.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635207
Hao Li, Han Liu, Dewei Hu, Jiacheng Wang, Ipek Oguz

To address prevalent issues in medical imaging, such as data acquisition challenges and label availability, transfer learning from natural to medical image domains serves as a viable strategy to produce reliable segmentation results. However, several existing barriers between domains need to be broken down, including addressing contrast discrepancies, managing anatomical variability, and adapting 2D pretrained models for 3D segmentation tasks. In this paper, we propose ProMISe, a prompt-driven 3D medical image segmentation model using only a single point prompt to leverage knowledge from a pretrained 2D image foundation model. In particular, we use the pretrained vision transformer from the Segment Anything Model (SAM) and integrate lightweight adapters to extract depth-related (3D) spatial context without updating the pretrained weights. For robust results, a hybrid network with complementary encoders is designed, and a boundary-aware loss is proposed to achieve precise boundaries. We evaluate our model on two public datasets for colon and pancreas tumor segmentations, respectively. Compared to the state-of-the-art segmentation methods with and without prompt engineering, our proposed method achieves superior performance. The code is publicly available at https://github.com/MedICL-VU/ProMISe.

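ProMISe integrates lightweight adapters into SAM's pretrained vision transformer without updating the pretrained weights; the adapter architecture itself is not detailed in this abstract. The following is a generic bottleneck-adapter sketch of that core idea — only the small adapter weights would be trained while the backbone features stay frozen. All names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter(h, w_down, w_up):
    # frozen backbone features h pass through a trainable bottleneck
    # (down-project, ReLU, up-project) and are fused with a residual add
    z = np.maximum(h @ w_down, 0.0)
    return h + z @ w_up

d, r = 8, 2                       # feature width and bottleneck rank (toy sizes)
h = rng.standard_normal((4, d))   # stand-in for pretrained ViT tokens
w_down = np.zeros((d, r))         # zero init: the adapter starts as an identity map
w_up = rng.standard_normal((r, d))
out = adapter(h, w_down, w_up)
```

The zero initialization is a common adapter trick: at the start of fine-tuning the block is an exact identity, so the pretrained model's behavior is preserved until the adapter learns something useful.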
Citations: 0
ROBUST QUANTIFICATION OF PERCENT EMPHYSEMA ON CT VIA DOMAIN ATTENTION: THE MULTI-ETHNIC STUDY OF ATHEROSCLEROSIS (MESA) LUNG STUDY.
Pub Date : 2024-05-01 Epub Date: 2024-08-22 DOI: 10.1109/isbi56570.2024.10635299
Xuzhe Zhang, Elsa D Angelini, Eric A Hoffman, Karol E Watson, Benjamin M Smith, R Graham Barr, Andrew F Laine

Robust quantification of pulmonary emphysema on computed tomography (CT) remains challenging for large-scale research studies that involve scans from different scanner types and for translation to clinical scans. Although the domain shifts in different CT scanners are subtle compared to shifts existing in other modalities (e.g., MRI) or cross-modality, emphysema is highly sensitive to it. Such subtle difference limits the application of general domain adaptation methods, such as image translation-based methods, as the contrast difference is too subtle to be distinguished. Existing studies have explored several directions to tackle this challenge, including density correction, noise filtering, regression, hidden Markov measure field (HMMF) model-based segmentation, and volume-adjusted lung density. Despite some promising results, previous studies either required a tedious workflow or eliminated opportunities for downstream emphysema subtyping, limiting efficient adaptation on a large-scale study. To alleviate this dilemma, we developed an end-to-end deep learning framework based on an existing HMMF segmentation framework. We first demonstrate that a regular UNet cannot replicate the existing HMMF results because of the lack of scanner priors. We then design a novel domain attention block, a simple yet efficient cross-modal block to fuse image visual features with quantitative scanner priors (a sequence), which significantly improves the results.

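The abstract describes a domain attention block that fuses image visual features with a sequence of quantitative scanner priors. A minimal cross-attention sketch of that fusion pattern is below; the paper's actual block design is not specified here, so the scaled-dot-product form, the residual fusion, and all shapes are assumptions.

```python
import numpy as np

def softmax(s, axis=-1):
    e = np.exp(s - s.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_attention(img_feats, priors):
    # each image feature vector attends over the sequence of scanner-prior
    # tokens; the attended prior summary is fused back with a residual add
    scale = np.sqrt(img_feats.shape[-1])
    attn = softmax(img_feats @ priors.T / scale, axis=-1)
    return img_feats + attn @ priors, attn

rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 32))   # 16 spatial positions, 32 channels (toy)
priors = rng.standard_normal((4, 32))   # 4 scanner-prior tokens (toy)
fused, attn = domain_attention(feats, priors)
```

Conditioning on the prior sequence this way lets one network handle multiple scanner types, which is the motivation the abstract gives for the block.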
Citations: 0
Proceedings. IEEE International Symposium on Biomedical Imaging