
Proceedings. IEEE International Conference on Computer Vision: Latest Publications

Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation.
Pub Date : 2023-10-01 DOI: 10.1109/iccv51070.2023.01958
Aishik Konwer, Xiaoling Hu, Joseph Bae, Xuan Xu, Chao Chen, Prateek Prasanna

In medical vision, different imaging modalities provide complementary information. However, in practice, not all modalities may be available during inference or even training. Previous approaches, e.g., knowledge distillation or image synthesis, often assume the availability of full modalities for all subjects during training; this is unrealistic and impractical due to the variability in data collection across sites. We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training, even when only limited full modality samples are available. Meta-learning enhances partial modality representations to full modality representations by meta-training on partial modality data and meta-testing on limited full modality samples. Additionally, we co-supervise this feature enrichment by introducing an auxiliary adversarial learning branch. More specifically, a missing modality detector is used as a discriminator to mimic the full modality setting. Our segmentation framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.
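
To illustrate the adversarial co-supervision described above, the following hypothetical PyTorch sketch trains a missing-modality detector as a discriminator while pushing the encoder to make partial-modality features look like full-modality ones; all module names, channel counts, and shapes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical feature encoder and missing-modality detector (discriminator).
encoder = nn.Sequential(nn.Conv3d(4, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(16, 64))
detector = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_det = torch.optim.Adam(detector.parameters(), lr=1e-4)

def adversarial_step(full_batch, partial_batch):
    """One co-supervision step: the detector learns to tell full- from
    partial-modality features; the encoder learns to make partial-modality
    features indistinguishable from full-modality ones."""
    # --- update detector ---
    with torch.no_grad():
        f_full = encoder(full_batch)
        f_part = encoder(partial_batch)
    logits = torch.cat([detector(f_full), detector(f_part)])
    labels = torch.cat([torch.ones(len(f_full), 1), torch.zeros(len(f_part), 1)])
    det_loss = F.binary_cross_entropy_with_logits(logits, labels)
    opt_det.zero_grad(); det_loss.backward(); opt_det.step()

    # --- update encoder: fool the detector on partial-modality input ---
    f_part = encoder(partial_batch)
    enc_loss = F.binary_cross_entropy_with_logits(
        detector(f_part), torch.ones(len(f_part), 1))
    opt_enc.zero_grad(); enc_loss.backward(); opt_enc.step()
    return det_loss.item(), enc_loss.item()

# Toy usage with random 4-modality volumes; the "partial" batch has two modalities zeroed out.
full = torch.randn(2, 4, 8, 8, 8)
partial = full.clone(); partial[:, 2:] = 0.0
print(adversarial_step(full, partial))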

{"title":"Enhancing Modality-Agnostic Representations via Meta-learning for Brain Tumor Segmentation.","authors":"Aishik Konwer, Xiaoling Hu, Joseph Bae, Xuan Xu, Chao Chen, Prateek Prasanna","doi":"10.1109/iccv51070.2023.01958","DOIUrl":"10.1109/iccv51070.2023.01958","url":null,"abstract":"<p><p>In medical vision, different imaging modalities provide complementary information. However, in practice, not all modalities may be available during inference or even training. Previous approaches, e.g., knowledge distillation or image synthesis, often assume the availability of full modalities for all subjects during training; this is unrealistic and impractical due to the variability in data collection across sites. We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training, even when only limited full modality samples are available. Meta-learning enhances partial modality representations to full modality representations by meta-training on partial modality data and meta-testing on limited full modality samples. Additionally, we co-supervise this feature enrichment by introducing an auxiliary adversarial learning branch. More specifically, a missing modality detector is used as a discriminator to mimic the full modality setting. Our segmentation framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"21358-21368"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11087061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140913360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SimpleClick: Interactive Image Segmentation with Simple Vision Transformers.
Pub Date : 2023-10-01 DOI: 10.1109/iccv51070.2023.02037
Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer

Click-based interactive image segmentation aims at extracting objects with a limited number of user clicks. A hierarchical backbone is the de facto architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to be a foundation model that can be finetuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation. To fill this gap, we propose SimpleClick, the first interactive segmentation method that leverages a plain backbone. Based on the plain backbone, we introduce a symmetric patch embedding layer that encodes clicks into the backbone with minor modifications to the backbone itself. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves 4.15 NoC@90 on SBD, improving 21.8% over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We provide a detailed computational analysis, highlighting the suitability of our method as a practical annotation tool.
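
The symmetric patch embedding idea can be sketched as follows: the click maps are patch-embedded with the same patch size as the image and simply added to the image tokens, leaving the plain ViT untouched. This is a minimal illustration under assumed dimensions (a ViT-B-like embedding size and 16x16 patches), not the released SimpleClick code.

import torch
import torch.nn as nn

class ClickPatchEmbed(nn.Module):
    """Hypothetical symmetric patch embedding: the click maps (one positive-
    and one negative-click channel) are patch-embedded with the same patch
    size as the image and added to the image tokens, so the ViT backbone
    itself needs no structural change."""
    def __init__(self, img_chans=3, click_chans=2, embed_dim=768, patch=16):
        super().__init__()
        self.img_embed = nn.Conv2d(img_chans, embed_dim, kernel_size=patch, stride=patch)
        self.click_embed = nn.Conv2d(click_chans, embed_dim, kernel_size=patch, stride=patch)

    def forward(self, image, clicks):
        tokens = self.img_embed(image) + self.click_embed(clicks)   # (B, D, H/16, W/16)
        return tokens.flatten(2).transpose(1, 2)                    # (B, N, D)

# Toy usage: a 224x224 image with a small binary disk marking a positive click.
embed = ClickPatchEmbed()
image = torch.randn(1, 3, 224, 224)
clicks = torch.zeros(1, 2, 224, 224); clicks[0, 0, 100:110, 100:110] = 1.0
print(embed(image, clicks).shape)   # torch.Size([1, 196, 768])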

{"title":"SimpleClick: Interactive Image Segmentation with Simple Vision Transformers.","authors":"Qin Liu, Zhenlin Xu, Gedas Bertasius, Marc Niethammer","doi":"10.1109/iccv51070.2023.02037","DOIUrl":"10.1109/iccv51070.2023.02037","url":null,"abstract":"<p><p>Click-based interactive image segmentation aims at extracting objects with a limited user clicking. A hierarchical backbone is the <i>de-facto</i> architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to be a foundation model that can be finetuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation. To fill this gap, we propose SimpleClick, the first interactive segmentation method that leverages a plain backbone. Based on the plain backbone, we introduce a symmetric patch embedding layer that encodes clicks into the backbone with minor modifications to the backbone itself. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves <b>4.15</b> NoC@90 on SBD, improving <b>21.8%</b> over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We provide a detailed computational analysis, highlighting the suitability of our method as a practical annotation tool.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"22233-22243"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378330/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PGFed: Personalize Each Client's Global Objective for Federated Learning.
Pub Date : 2023-10-01 Epub Date: 2024-01-15 DOI: 10.1109/iccv51070.2023.00365
Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu

Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only implicitly transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose Personalized Global Federated Learning (PGFed), a novel personalized FL framework that enables each client to personalize its own global objective by explicitly and adaptively aggregating the empirical risks of itself and other clients. To avoid massive O(N²) communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.
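
A minimal sketch of how a first-order approximation of other clients' risks could enter a client's personalized objective is given below: each client j contributes only a scalar risk value and a gradient taken at its own parameters, and client i linearizes R_j around theta_j. The weighting scheme, names, and flattened-parameter view are simplifying assumptions, not the exact PGFed formulation.

import torch

def auxiliary_risk(theta_i, snapshots, alpha):
    """Hypothetical first-order surrogate for the other clients' risks:
    R_j(theta_i) ~= R_j(theta_j) + <grad_j, theta_i - theta_j>.
    `snapshots` is a list of (risk_value, grad, theta_j) tuples that the
    server could keep from each client's latest update."""
    aux = torch.zeros((), dtype=theta_i.dtype)
    for risk_j, grad_j, theta_j in snapshots:
        aux = aux + risk_j + torch.dot(grad_j, theta_i - theta_j)
    return alpha * aux / max(len(snapshots), 1)

# Toy usage with flattened parameter vectors.
theta_i = torch.randn(10, requires_grad=True)
snapshots = [(0.7, torch.randn(10), torch.randn(10)),
             (0.4, torch.randn(10), torch.randn(10))]
local_risk = (theta_i ** 2).mean()               # stand-in for client i's own loss
total = local_risk + auxiliary_risk(theta_i, snapshots, alpha=0.1)
total.backward()
print(total.item(), theta_i.grad.norm().item())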

{"title":"PGFed: Personalize Each Client's Global Objective for Federated Learning.","authors":"Jun Luo, Matias Mendieta, Chen Chen, Shandong Wu","doi":"10.1109/iccv51070.2023.00365","DOIUrl":"https://doi.org/10.1109/iccv51070.2023.00365","url":null,"abstract":"<p><p>Personalized federated learning has received an upsurge of attention due to the mediocre performance of conventional federated learning (FL) over heterogeneous data. Unlike conventional FL which trains a single global consensus model, personalized FL allows different models for different clients. However, existing personalized FL algorithms only <b>implicitly</b> transfer the collaborative knowledge across the federation by embedding the knowledge into the aggregated model or regularization. We observed that this implicit knowledge transfer fails to maximize the potential of each client's empirical risk toward other clients. Based on our observation, in this work, we propose <b>P</b>ersonalized <b>G</b>lobal <b>Fed</b>erated Learning (PGFed), a novel personalized FL framework that enables each client to <b>personalize</b> its own <b>global</b> objective by <b>explicitly</b> and adaptively aggregating the empirical risks of itself and other clients. To avoid massive <math><mrow><mrow><mo>(</mo><mrow><mi>O</mi><mrow><mo>(</mo><mrow><msup><mi>N</mi><mn>2</mn></msup></mrow><mo>)</mo></mrow></mrow><mo>)</mo></mrow></mrow></math> communication overhead and potential privacy leakage while achieving this, each client's risk is estimated through a first-order approximation for other clients' adaptive risk aggregation. On top of PGFed, we develop a momentum upgrade, dubbed PGFedMo, to more efficiently utilize clients' empirical risks. Our extensive experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods. The code is publicly available at https://github.com/ljaiverson/pgfed.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"3923-3933"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11024864/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140853842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior.
Pub Date : 2023-10-01 Epub Date: 2024-01-15 DOI: 10.1109/ICCV51070.2023.01140
Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap

Deep Image Prior (DIP) shows that some network architectures inherently tend towards generating smooth images while resisting noise, a phenomenon known as spectral bias. Image denoising is a natural application of this property. Although denoising with DIP mitigates the need for large training sets, two often intertwined practical challenges need to be overcome: architectural design and noise fitting. Existing methods either handcraft or search for suitable architectures from a vast design space, due to the limited understanding of how architectural choices affect the denoising outcome. In this study, we demonstrate from a frequency perspective that unlearnt upsampling is the main driving force behind the denoising phenomenon with DIP. This finding leads to straightforward strategies for identifying a suitable architecture for every image without laborious search. Extensive experiments show that the estimated architectures achieve better denoising results than existing methods with up to 95% fewer parameters. Thanks to this under-parameterization, the resulting architectures are less prone to noise-fitting.
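
The role of unlearnt upsampling can be illustrated with a toy DIP-style denoiser in which every upsampling step is fixed bilinear interpolation rather than a learned transposed convolution; layer sizes, the step count, and the random input code below are illustrative assumptions, not the paper's estimated architectures.

import torch
import torch.nn as nn

# Minimal DIP-style denoiser sketch: the upsampling path uses fixed (unlearnt)
# bilinear interpolation, the component the paper identifies as the main
# source of the smoothing / spectral bias.
decoder = nn.Sequential(
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),  # unlearnt
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),  # unlearnt
    nn.Conv2d(32, 1, 3, padding=1),
)

noisy = torch.rand(1, 1, 64, 64)                 # stand-in for a single noisy image
z = torch.randn(1, 32, 16, 16)                   # fixed random input code
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)

for step in range(200):                           # early stopping limits noise fitting
    out = decoder(z)
    loss = ((out - noisy) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

denoised = decoder(z).detach()
print(denoised.shape)                             # torch.Size([1, 1, 64, 64])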

{"title":"The Devil is in the Upsampling: Architectural Decisions Made Simpler for Denoising with Deep Image Prior.","authors":"Yilin Liu, Jiang Li, Yunkui Pang, Dong Nie, Pew-Thian Yap","doi":"10.1109/ICCV51070.2023.01140","DOIUrl":"10.1109/ICCV51070.2023.01140","url":null,"abstract":"<p><p>Deep Image Prior (DIP) shows that some network architectures inherently tend towards generating smooth images while resisting noise, a phenomenon known as spectral bias. Image denoising is a natural application of this property. Although denoising with DIP mitigates the need for large training sets, two often intertwined practical challenges need to be overcome: architectural design and noise fitting. Existing methods either handcraft or search for suitable architectures from a vast design space, due to the limited understanding of how architectural choices affect the denoising outcome. In this study, we demonstrate from a frequency perspective that unlearnt upsampling is the main driving force behind the denoising phenomenon with DIP. This finding leads to straightforward strategies for identifying a suitable architecture for every image without laborious search. Extensive experiments show that the estimated architectures achieve superior denoising results than existing methods with up to 95% fewer parameters. Thanks to this under-parameterization, the resulting architectures are less prone to noise-fitting.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"12374-12383"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11078028/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140900571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving Representation Learning for Histopathologic Images with Cluster Constraints.
Pub Date : 2023-01-01 Epub Date: 2024-01-15 DOI: 10.1109/iccv51070.2023.01957
Weiyi Wu, Chongyang Gao, Joseph DiPalma, Soroush Vosoughi, Saeed Hassanpour

Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides, a process that is both labor-intensive and time-consuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they don't rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework. This framework aims for transferable representation learning and semantically meaningful clustering by synergizing invariance loss and clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on the Camelyon16 and a pancreatic cancer dataset. The code and additional details are accessible at https://github.com/wwyi1828/CluSiam.
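
A schematic of how an invariance loss and a clustering loss can be combined on two augmented views of the same patch is sketched below; the prototype-based clustering term and all dimensions are stand-in assumptions and do not reproduce the CluSiam objective from the paper.

import torch
import torch.nn.functional as F

def invariance_and_cluster_loss(z1, z2, prototypes, temp=0.1, lam=1.0):
    """Toy combination of an invariance term (agreement between two augmented
    views of the same patch) and a clustering term (the two views should fall
    into the same prototype/cluster). Only a schematic stand-in for the
    paper's losses."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    c = F.normalize(prototypes, dim=1)

    # Invariance: pull the two views of each patch together.
    inv = -(z1 * z2).sum(dim=1).mean()

    # Clustering: the soft assignment of view 1 should predict that of view 2.
    p1 = F.log_softmax(z1 @ c.t() / temp, dim=1)
    with torch.no_grad():
        q2 = F.softmax(z2 @ c.t() / temp, dim=1)
    clu = F.kl_div(p1, q2, reduction="batchmean")
    return inv + lam * clu

# Toy usage: 8 patch embeddings per view, 16 learnable prototypes.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
prototypes = torch.randn(16, 128, requires_grad=True)
print(invariance_and_cluster_loss(z1, z2, prototypes).item())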

{"title":"Improving Representation Learning for Histopathologic Images with Cluster Constraints.","authors":"Weiyi Wu, Chongyang Gao, Joseph DiPalma, Soroush Vosoughi, Saeed Hassanpour","doi":"10.1109/iccv51070.2023.01957","DOIUrl":"10.1109/iccv51070.2023.01957","url":null,"abstract":"<p><p>Recent advances in whole-slide image (WSI) scanners and computational capabilities have significantly propelled the application of artificial intelligence in histopathology slide analysis. While these strides are promising, current supervised learning approaches for WSI analysis come with the challenge of exhaustively labeling high-resolution slides-a process that is both labor-intensive and timeconsuming. In contrast, self-supervised learning (SSL) pretraining strategies are emerging as a viable alternative, given that they don't rely on explicit data annotations. These SSL strategies are quickly bridging the performance disparity with their supervised counterparts. In this context, we introduce an SSL framework. This framework aims for transferable representation learning and semantically meaningful clustering by synergizing invariance loss and clustering loss in WSI analysis. Notably, our approach outperforms common SSL methods in downstream classification and clustering tasks, as evidenced by tests on the Camelyon16 and a pancreatic cancer dataset. The code and additional details are accessible at https://github.com/wwyi1828/CluSiam.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2023 ","pages":"21347-21357"},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11062482/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140872369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.
Pub Date : 2019-11-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.00028
Md Mahfuzur Rahman Siddiquee, Zongwei Zhou, Nima Tajbakhsh, Ruibin Feng, Michael B Gotway, Yoshua Bengio, Jianming Liang

Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN "virtually heal" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.
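
The conditional identity loss and cycle-consistency loss described above can be sketched as follows, with a placeholder generator G(image, domain_code); the adversarial and domain-classification terms are omitted, and all shapes and weights are illustrative, not the released Fixed-Point GAN code.

import torch
import torch.nn.functional as F

def fixed_point_losses(G, x, src_domain, tgt_domain):
    """Schematic sketch of the two translation losses named in the abstract,
    using a placeholder generator G(image, domain_code)."""
    # Conditional identity loss: translating an image to its *own* domain
    # should change nothing (same-domain supervision).
    same = G(x, src_domain)
    id_loss = F.l1_loss(same, x)

    # Cycle-consistency loss: translate to the target domain and back.
    fake = G(x, tgt_domain)
    recon = G(fake, src_domain)
    cyc_loss = F.l1_loss(recon, x)
    return id_loss, cyc_loss

# Toy usage: a "generator" that just adds a learned per-domain bias.
bias = torch.nn.Parameter(torch.zeros(2, 1, 1, 1))
def G(x, d):
    return x + bias[d]

x = torch.rand(4, 1, 32, 32)
id_loss, cyc_loss = fixed_point_losses(G, x, src_domain=0, tgt_domain=1)
(id_loss + 10.0 * cyc_loss).backward()
print(id_loss.item(), cyc_loss.item())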

{"title":"Learning Fixed Points in Generative Adversarial Networks: From Image-to-Image Translation to Disease Detection and Localization.","authors":"Md Mahfuzur Rahman Siddiquee,&nbsp;Zongwei Zhou,&nbsp;Nima Tajbakhsh,&nbsp;Ruibin Feng,&nbsp;Michael B Gotway,&nbsp;Yoshua Bengio,&nbsp;Jianming Liang","doi":"10.1109/iccv.2019.00028","DOIUrl":"https://doi.org/10.1109/iccv.2019.00028","url":null,"abstract":"<p><p>Generative adversarial networks (GANs) have ushered in a revolution in image-to-image translation. The development and proliferation of GANs raises an interesting question: can we train a GAN to remove an object, if present, from an image while otherwise preserving the image? Specifically, can a GAN \"virtually heal\" anyone by turning his medical image, with an unknown health status (diseased or healthy), into a healthy one, so that diseased regions could be revealed by subtracting those two images? Such a task requires a GAN to identify a minimal subset of target pixels for domain translation, an ability that we call fixed-point translation, which no GAN is equipped with yet. Therefore, we propose a new GAN, called Fixed-Point GAN, trained by (1) supervising same-domain translation through a conditional identity loss, and (2) regularizing cross-domain translation through revised adversarial, domain classification, and cycle consistency loss. Based on fixed-point translation, we further derive a novel framework for disease detection and localization using only image-level annotation. Qualitative and quantitative evaluations demonstrate that the proposed method outperforms the state of the art in multi-domain image-to-image translation and that it surpasses predominant weakly-supervised localization methods in both disease detection and localization. Implementation is available at https://github.com/jlianglab/Fixed-Point-GAN.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"191-200"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.00028","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"38108077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 74
Dilated Convolutional Neural Networks for Sequential Manifold-valued Data.
Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01072
Xingjian Zhen, Rudrasis Chakraborty, Nicholas Vogt, Barbara B Bendlin, Vikas Singh

Efforts are underway to study ways via which the power of deep neural networks can be extended to non-standard data types such as structured data (e.g., graphs) or manifold-valued data (e.g., unit vectors or special matrices). Often, sizable empirical improvements are possible when the geometry of such data spaces is incorporated into the design of the model, architecture, and the algorithms. Motivated by neuroimaging applications, we study formulations where the data are sequential manifold-valued measurements. This case is common in brain imaging, where the samples correspond to symmetric positive definite matrices or orientation distribution functions. Instead of a recurrent model which poses computational/technical issues, and inspired by recent results showing the viability of dilated convolutional models for sequence prediction, we develop a dilated convolutional neural network architecture for this task. On the technical side, we show how the modules needed in our network can be derived while explicitly taking the Riemannian manifold structure into account. We show how the operations needed can leverage known results for calculating the weighted Fréchet Mean (wFM). Finally, we present scientific results for group difference analysis in Alzheimer's disease (AD) where the groups are derived using AD pathology load: here the model finds several brain fiber bundles that are related to AD even when the subjects are all still cognitively healthy.
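
As a simplified illustration of the weighted Fréchet Mean (wFM) operation mentioned above, the sketch below computes the wFM of SPD matrices under the log-Euclidean metric, where a closed form exists; the paper itself uses a recursive wFM estimator under a different (affine-invariant) geometry, so treat this only as an assumption-laden stand-in.

import numpy as np

def spd_log(X):
    """Matrix logarithm of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def weighted_frechet_mean_spd(mats, weights):
    """Weighted Fréchet mean of SPD matrices under the log-Euclidean metric,
    where it has the closed form exp(sum_i w_i log(X_i))."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return spd_exp(sum(w * spd_log(X) for w, X in zip(weights, mats)))

# Toy usage: wFM of three random 3x3 SPD matrices (e.g., diffusion tensors).
rng = np.random.default_rng(0)
mats = [(lambda A: A @ A.T + 3 * np.eye(3))(rng.normal(size=(3, 3))) for _ in range(3)]
mean = weighted_frechet_mean_spd(mats, weights=[0.5, 0.3, 0.2])
print(np.linalg.eigvalsh(mean))   # all eigenvalues positive, so the mean stays SPD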

{"title":"Dilated Convolutional Neural Networks for Sequential Manifold-valued Data.","authors":"Xingjian Zhen, Rudrasis Chakraborty, Nicholas Vogt, Barbara B Bendlin, Vikas Singh","doi":"10.1109/iccv.2019.01072","DOIUrl":"10.1109/iccv.2019.01072","url":null,"abstract":"<p><p>Efforts are underway to study ways via which the power of deep neural networks can be extended to non-standard data types such as structured data (e.g., graphs) or manifold-valued data (e.g., unit vectors or special matrices). Often, sizable empirical improvements are possible when the geometry of such data spaces are incorporated into the design of the model, architecture, and the algorithms. Motivated by neuroimaging applications, we study formulations where the data are sequential manifold-valued measurements. This case is common in brain imaging, where the samples correspond to symmetric positive definite matrices or orientation distribution functions. Instead of a recurrent model which poses computational/technical issues, and inspired by recent results showing the viability of dilated convolutional models for sequence prediction, we develop a dilated convolutional neural network architecture for this task. On the technical side, we show how the modules needed in our network can be derived while explicitly taking the Riemannian manifold structure into account. We show how the operations needed can leverage known results for calculating the weighted Fréchet Mean (wFM). Finally, we present scientific results for group difference analysis in Alzheimer's disease (AD) where the groups are derived using AD pathology load: here the model finds several brain fiber bundles that are related to AD even when the subjects are all still cognitively healthy.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10620-10630"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220031/pdf/nihms-1058367.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37932355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DUAL-GLOW: Conditional Flow-Based Generative Model for Modality Transfer.
Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01071
Haoliang Sun, Ronak Mehta, Hao H Zhou, Zhichun Huang, Sterling C Johnson, Vivek Prabhakaran, Vikas Singh

Positron emission tomography (PET) imaging is an imaging modality for diagnosing a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small sample size regime (much smaller than dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes w.r.t. the two image types. We also extend our framework to leverage "side" information (or attributes) when available. By controlling the PET generation through "conditioning" on age, our model is also able to capture brain FDG-PET (hypometabolism) changes, as a function of age. We present experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects, and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.
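
The invertible building block behind Glow-style flows like the one above is the affine coupling layer; the minimal sketch below shows one such layer with a toy conditioning MLP. The relation network that couples the two latent spaces in DUAL-GLOW is omitted, and all sizes are assumptions.

import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Minimal affine coupling layer: half of the input conditions an affine
    transform of the other half, which keeps the mapping invertible and its
    Jacobian log-determinant cheap to compute."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))   # outputs log-scale and shift

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)                      # keep scales well-behaved
        y2 = x2 * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)                     # log |det Jacobian|
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        log_s, t = self.net(y1).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        x2 = (y2 - t) * torch.exp(-log_s)
        return torch.cat([y1, x2], dim=1)

# Toy check that the layer really is invertible.
layer = AffineCoupling(dim=8)
x = torch.randn(4, 8)
y, log_det = layer(x)
print(torch.allclose(layer.inverse(y), x, atol=1e-5), log_det.shape)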

{"title":"DUAL-GLOW: Conditional Flow-Based Generative Model for Modality Transfer.","authors":"Haoliang Sun,&nbsp;Ronak Mehta,&nbsp;Hao H Zhou,&nbsp;Zhichun Huang,&nbsp;Sterling C Johnson,&nbsp;Vivek Prabhakaran,&nbsp;Vikas Singh","doi":"10.1109/iccv.2019.01071","DOIUrl":"https://doi.org/10.1109/iccv.2019.01071","url":null,"abstract":"<p><p>Positron emission tomography (PET) imaging is an imaging modality for diagnosing a number of neurological diseases. In contrast to Magnetic Resonance Imaging (MRI), PET is costly and involves injecting a radioactive substance into the patient. Motivated by developments in modality transfer in vision, we study the generation of certain types of PET images from MRI data. We derive new flow-based generative models which we show perform well in this small sample size regime (much smaller than dataset sizes available in standard vision tasks). Our formulation, DUAL-GLOW, is based on two invertible networks and a relation network that maps the latent spaces to each other. We discuss how given the prior distribution, learning the conditional distribution of PET given the MRI image reduces to obtaining the conditional distribution between the two latent codes w.r.t. the two image types. We also extend our framework to leverage \"side\" information (or attributes) when available. By controlling the PET generation through \"conditioning\" on age, our model is also able to capture brain FDG-PET (hypometabolism) changes, as a function of age. We present experiments on the Alzheimers Disease Neuroimaging Initiative (ADNI) dataset with 826 subjects, and obtain good performance in PET image synthesis, qualitatively and quantitatively better than recent works.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10610-10619"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.01071","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39893370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 40
Scene Graph Prediction with Limited Labels.
Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.00267
Vincent S Chen, Paroma Varma, Ranjay Krishna, Michael Bernstein, Christopher Ré, Li Fei-Fei

Visual knowledge bases such as Visual Genome power numerous applications in computer vision, including visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene graph models to date are limited to training on a small set of visual relationships that have thousands of training labels each. Hiring human annotators is expensive, and using textual knowledge base completion methods is incompatible with visual data. In this paper, we introduce a semi-supervised method that assigns probabilistic relationship labels to a large number of unlabeled images using few labeled examples. We analyze visual relationships to suggest two types of image-agnostic features that are used to generate noisy heuristics, whose outputs are aggregated using a factor graph-based generative model. With as few as 10 labeled examples per relationship, the generative model creates enough training data to train any existing state-of-the-art scene graph model. We demonstrate that our method outperforms all baseline approaches on scene graph prediction by 5.16 recall@100 for PREDCLS. In our limited label setting, we define a complexity metric for relationships that serves as an indicator (R2 = 0.778) for conditions under which our method succeeds over transfer learning, the de facto approach for training with limited labels.
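
Below is a toy sketch of the noisy-heuristic idea: two image-agnostic labeling functions (one spatial, one categorical) vote on the relationship "riding", and a simple weighted vote stands in for the paper's factor-graph generative model. All box coordinates, class pairs, and accuracy weights are made up for illustration.

import numpy as np

# Two toy "image-agnostic" heuristics: one spatial (subject box above and
# overlapping the object box) and one categorical (subject/object class pair
# seen with "riding" in the small labeled set). 0 = abstain, +1/-1 = vote.

def spatial_heuristic(sub_box, obj_box):
    sx1, sy1, sx2, sy2 = sub_box
    ox1, oy1, ox2, oy2 = obj_box
    overlaps = not (sx2 < ox1 or ox2 < sx1)
    above = sy1 < oy1
    return 1 if (overlaps and above) else -1

def category_heuristic(sub_cls, obj_cls, seen_pairs):
    if (sub_cls, obj_cls) in seen_pairs:
        return 1
    return 0   # abstain when the class pair was never observed

def aggregate(votes, accuracies):
    """Weighted vote; in the real pipeline the weights would come from the
    generative model's learned per-heuristic accuracies."""
    score = sum(a * v for a, v in zip(accuracies, votes))
    return 1 / (1 + np.exp(-score))   # probabilistic label

seen_pairs = {("person", "horse"), ("person", "bike")}
votes = [spatial_heuristic((40, 10, 90, 80), (30, 60, 110, 150)),
         category_heuristic("person", "horse", seen_pairs)]
print(aggregate(votes, accuracies=[0.8, 1.2]))   # probability of "riding"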

{"title":"Scene Graph Prediction with Limited Labels.","authors":"Vincent S Chen,&nbsp;Paroma Varma,&nbsp;Ranjay Krishna,&nbsp;Michael Bernstein,&nbsp;Christopher Ré,&nbsp;Li Fei-Fei","doi":"10.1109/iccv.2019.00267","DOIUrl":"https://doi.org/10.1109/iccv.2019.00267","url":null,"abstract":"<p><p>Visual knowledge bases such as Visual Genome power numerous applications in computer vision, including visual question answering and captioning, but suffer from sparse, incomplete relationships. All scene graph models to date are limited to training on a small set of visual relationships that have thousands of training labels each. Hiring human annotators is expensive, and using textual knowledge base completion methods are incompatible with visual data. In this paper, we introduce a semi-supervised method that assigns probabilistic relationship labels to a large number of unlabeled images using few' labeled examples. We analyze visual relationships to suggest two types of image-agnostic features that are used to generate noisy heuristics, whose outputs are aggregated using a factor graph-based generative model. With as few as 10 labeled examples per relationship, the generative model creates enough training data to train any existing state-of-the-art scene graph model. We demonstrate that our method outperforms all baseline approaches on scene graph prediction by 5.16 recall@ 100 for PREDCLS. In our limited label setting, we define a complexity metric for relationships that serves as an indicator (R<sup>2</sup> = 0.778) for conditions under which our method succeeds over transfer learning, the de-facto approach for training with limited labels.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"2580-2590"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/iccv.2019.00267","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37776489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging.
Pub Date : 2019-10-01 Epub Date: 2020-02-27 DOI: 10.1109/iccv.2019.01079
Seong Jae Hwang, Zirui Tao, Won Hwa Kim, Vikas Singh

We develop a conditional generative model for longitudinal image datasets based on sequential invertible neural networks. Longitudinal image acquisitions are common in various scientific and biomedical studies where often each image sequence sample may also come together with various secondary (fixed or temporally dependent) measurements. The key goal is not only to estimate the parameters of a deep generative model for the given longitudinal data, but also to enable evaluation of how the temporal course of the generated longitudinal samples is influenced as a function of induced changes in the (secondary) temporal measurements (or events). Our proposed formulation incorporates recurrent subnetworks and temporal context gating, which provide a smooth transition in a temporal sequence of generated data that can be easily informed or modulated by secondary temporal conditioning variables. We show that the formulation works well despite the smaller sample sizes common in these applications. Our model is validated on two video datasets and a longitudinal Alzheimer's disease (AD) dataset for both quantitative and qualitative evaluations of the generated samples. Further, using our generated longitudinal image samples, we show that we can capture the pathological progressions in the brain that turn out to be consistent with the existing literature, and could facilitate various types of downstream statistical analysis.
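
A toy sketch of temporal context gating is shown below: a recurrent state summarizes the latent sequence, and a gate computed from the secondary conditioning variables modulates its contribution at each step. The wiring, dimensions, and conditioning variables (age and an event indicator) are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

class TemporalContextGate(nn.Module):
    """Toy temporal context gating: a GRU state summarizes the sequence so far,
    and a gate computed from the secondary measurement at each time step
    controls how much of that state flows into the next output."""
    def __init__(self, latent_dim=16, cond_dim=2, hidden=32):
        super().__init__()
        self.rnn = nn.GRUCell(latent_dim, hidden)
        self.gate = nn.Sequential(nn.Linear(cond_dim + hidden, hidden), nn.Sigmoid())
        self.out = nn.Linear(hidden, latent_dim)

    def forward(self, z_seq, cond_seq):
        # z_seq: (T, B, latent_dim); cond_seq: (T, B, cond_dim)
        h = torch.zeros(z_seq.size(1), self.rnn.hidden_size)
        outputs = []
        for z_t, c_t in zip(z_seq, cond_seq):
            h = self.rnn(z_t, h)
            g = self.gate(torch.cat([c_t, h], dim=1))    # context-dependent gate
            outputs.append(self.out(g * h))              # gated contribution
        return torch.stack(outputs)

# Toy usage: a 5-step sequence for 3 subjects, conditioned on age and an event flag.
model = TemporalContextGate()
z_seq = torch.randn(5, 3, 16)
cond_seq = torch.randn(5, 3, 2)
print(model(z_seq, cond_seq).shape)   # torch.Size([5, 3, 16])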

{"title":"Conditional Recurrent Flow: Conditional Generation of Longitudinal Samples with Applications to Neuroimaging.","authors":"Seong Jae Hwang, Zirui Tao, Won Hwa Kim, Vikas Singh","doi":"10.1109/iccv.2019.01079","DOIUrl":"10.1109/iccv.2019.01079","url":null,"abstract":"<p><p>We develop a conditional generative model for longitudinal image datasets based on sequential invertible neural networks. Longitudinal image acquisitions are common in various scientific and biomedical studies where often each image sequence sample may also come together with various secondary (fixed or temporally dependent) measurements. The key goal is not only to estimate the parameters of a deep generative model for the given longitudinal data, but also to enable evaluation of how the temporal course of the generated longitudinal samples are influenced as a function of induced changes in the (secondary) temporal measurements (or events). Our proposed formulation incorporates recurrent subnetworks and temporal context gating, which provide a smooth transition in a temporal sequence of generated data that can be easily informed or modulated by secondary temporal conditioning variables. We show that the formulation works well despite the smaller sample sizes common in these applications. Our model is validated on two video datasets and a longitudinal Alzheimer's disease (AD) dataset for both quantitative and qualitative evaluations of the generated samples. Further, using our generated longitudinal image samples, we show that we can capture the pathological progressions in the brain that turn out to be consistent with the existing literature, and could facilitate various types of downstream statistical analysis.</p>","PeriodicalId":74564,"journal":{"name":"Proceedings. IEEE International Conference on Computer Vision","volume":"2019 ","pages":"10691-10700"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7220239/pdf/nihms-1058360.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37932354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0