
Latest publications from the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

DiSparse: Disentangled Sparsification for Multitask Model Compression
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01206
Xing Sun, Ali Hassani, Zhangyang Wang, Gao Huang, Humphrey Shi
Despite the popularity of Model Compression and Multitask Learning, how to effectively compress a multitask model has been less thoroughly analyzed due to the challenging entanglement of tasks in the parameter space. In this paper, we propose DiSparse, a simple, effective, and first-of-its-kind multitask pruning and sparse training scheme. We consider each task independently by disentangling the importance measurement, and take unanimous decisions among all tasks when performing parameter pruning and selection. Our experimental results demonstrate superior performance on various configurations and settings compared to popular sparse training and pruning methods. Besides its effectiveness in compression, DiSparse also provides a powerful tool to the multitask learning community. Surprisingly, we even observed better performance than some dedicated multitask learning methods in several cases, despite the high model sparsity enforced by DiSparse. We analyzed the pruning masks generated with DiSparse and observed strikingly similar sparse network architectures identified by each task, even before training starts. We also observe the existence of a "watershed" layer where task relatedness sharply drops, implying no benefit in continued parameter sharing. Our code and models will be available at: https://github.com/SHI-Labs/DiSparse-Multitask-Model-Compression.
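The unanimous-decision idea from the abstract can be made concrete with a small sketch. Below is a minimal NumPy illustration, assuming one scalar importance score per weight per task (the paper's exact importance measure and pruning schedule may differ): a weight survives only if every task, pruning independently, would keep it.

```python
import numpy as np

def unanimous_prune_mask(task_importances, sparsity):
    """Keep a weight only if every task, pruning independently at the
    given sparsity level, would keep it (the 'unanimous decision')."""
    masks = []
    for imp in task_importances:                    # one importance map per task
        k = max(1, int(imp.size * (1 - sparsity)))  # weights each task keeps
        thresh = np.partition(imp.ravel(), -k)[-k]  # k-th largest score
        masks.append(imp >= thresh)
    return np.logical_and.reduce(masks)             # intersection across tasks

# Toy usage: three tasks scoring the same 1000-weight layer.
rng = np.random.default_rng(0)
importances = [rng.random(1000) for _ in range(3)]
mask = unanimous_prune_mask(importances, sparsity=0.9)
print(mask.mean())  # fraction of weights jointly kept; at most 0.1 by construction
```

Since disagreement across tasks only shrinks the intersection, the achieved sparsity is at least the per-task sparsity.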
Citations: 7
ETHSeg: An Amodel Instance Segmentation Network and a Real-world Dataset for X-Ray Waste Inspection
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.00232
Lingteng Qiu, Zhangyang Xiong, Xuhao Wang, Kenkun Liu, Yihan Li, Guanying Chen, Xiaoguang Han, Shuguang Cui
Waste inspection for packaged waste is an important step in the waste disposal pipeline. Previous methods either rely on manual visual checking or on RGB image-based inspection algorithms, requiring costly preparation procedures (e.g., opening the bag and spreading the waste items). Moreover, occluded items are very likely to be left out. Inspired by the fact that X-rays have a strong penetrating power to see through the bag and overlapping objects, we propose to perform waste inspection efficiently using X-ray images, without the need to open the bag. We introduce a novel problem of instance-level waste segmentation in X-ray images for intelligent waste inspection, and contribute a real dataset consisting of 5,038 X-ray images (30,881 waste items in total) with high-quality annotations (i.e., waste categories, object boxes, and instance-level masks) as a benchmark for this problem. As existing segmentation methods are mainly designed for natural images and cannot take advantage of the characteristics of X-ray waste images (e.g., heavy occlusion and the penetration effect), we propose a new instance segmentation method that explicitly takes these image characteristics into account. Specifically, our method adopts an easy-to-hard disassembling strategy that uses high-confidence predictions to guide the segmentation of highly overlapped objects, and a global structure guidance module to better capture the complex contour information caused by the penetration effect. Extensive experiments demonstrate the effectiveness of the proposed method. Our dataset is released at WIXRayNet.
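As a rough illustration of an easy-to-hard disassembling strategy (the data layout, threshold, and the `refine` hook below are ours, not the paper's), one can segment confident instances first and condition the harder, overlapped ones on them:

```python
import numpy as np

def easy_to_hard(instances, refine, conf_thresh=0.8):
    """instances: list of {'score': float, 'mask': bool ndarray}.
    Segment high-confidence instances first; use their union as context
    when refining the low-confidence (typically overlapped) ones."""
    easy = [i for i in instances if i["score"] >= conf_thresh]
    hard = sorted((i for i in instances if i["score"] < conf_thresh),
                  key=lambda i: -i["score"])
    context = np.zeros_like(instances[0]["mask"])
    for inst in easy:
        context |= inst["mask"]
    for inst in hard:
        inst["mask"] = refine(inst["mask"], context)  # guided by easy masks
        context |= inst["mask"]
    return easy + hard

# Toy refine: carve already-claimed pixels out of the overlapped mask.
instances = [{"score": 0.9, "mask": np.array([1, 1, 0, 0], bool)},
             {"score": 0.5, "mask": np.array([0, 1, 1, 0], bool)}]
out = easy_to_hard(instances, refine=lambda m, ctx: m & ~ctx)
```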
Citations: 1
Semi-Supervised Few-shot Learning via Multi-Factor Clustering
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01416
Jie Ling, Lei Liao, Meng Yang, Jia Shuai
The scarcity of labeled data and the problem of model overfitting have been long-standing challenges in few-shot learning. Recently, semi-supervised few-shot learning has been developed to obtain pseudo-labels of unlabeled samples for expanding the support set. However, the relationship between unlabeled and labeled data is not well exploited when generating pseudo-labels, whose noise will directly harm model learning. In this paper, we propose a Clustering-based semi-supervised Few-Shot Learning (cluster-FSL) method to solve the above problems in image classification. Using multi-factor collaborative representation, a novel Multi-Factor Clustering (MFC) is designed to fuse the information of the few-shot data distribution, which can generate soft and hard pseudo-labels for unlabeled samples based on the labeled data. We then exploit the pseudo-labels of unlabeled samples produced by MFC to expand the support set and obtain more distribution information. Furthermore, robust data augmentation is applied to the support set in the fine-tuning phase to increase the diversity of labeled samples. We verified the validity of cluster-FSL by comparing it with other few-shot learning methods on three popular benchmark datasets: miniImageNet, tieredImageNet, and CUB-200-2011. Ablation experiments further demonstrate that our MFC can effectively fuse the distribution information of labeled samples and provide high-quality pseudo-labels. Our code is available at: https://gitlab.com/smartllvlab/cluster-fsl
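A stripped-down version of the pseudo-labeling step might look as follows. The prototype-distance scoring here is a single "factor" of our own choosing, whereas the paper's MFC fuses several; soft labels come from a softmax over distances, hard labels from the argmax.

```python
import numpy as np

def pseudo_labels(support, support_y, unlabeled, n_class, temp=1.0):
    """Soft and hard pseudo-labels for unlabeled features, scored by
    squared distance to class prototypes from the labeled support set."""
    protos = np.stack([support[support_y == c].mean(axis=0)
                       for c in range(n_class)])
    d2 = ((unlabeled[:, None, :] - protos[None]) ** 2).sum(-1)
    soft = np.exp(-d2 / temp)
    soft /= soft.sum(axis=1, keepdims=True)    # soft pseudo-labels
    return soft, soft.argmax(axis=1)           # hard pseudo-labels

# 2-way 1-shot toy: confident pseudo-labels can then expand the support set.
support = np.array([[0.0, 0.0], [1.0, 1.0]])
support_y = np.array([0, 1])
soft, hard = pseudo_labels(support, support_y, np.random.rand(10, 2), n_class=2)
```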
Citations: 5
HLRTF: Hierarchical Low-Rank Tensor Factorization for Inverse Problems in Multi-Dimensional Imaging
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01870
Yisi Luo, Xile Zhao, Deyu Meng, Tai-Xiang Jiang
Inverse problems in multi-dimensional imaging, e.g., completion, denoising, and compressive sensing, are challenging owing to the large volume of the data and the inherent ill-posedness. To tackle these issues, this work learns a hierarchical low-rank tensor factorization (HLRTF) in an unsupervised manner, solely from an observed multi-dimensional image. Specifically, we embed a deep neural network (DNN) into the tensor singular value decomposition framework and develop the HLRTF, which captures the underlying low-rank structures of multi-dimensional images with compact representation abilities. This DNN serves as a nonlinear transform from one vector to another to help obtain a better low-rank representation. Our HLRTF infers the parameters of the DNN and the underlying low-rank structure of the original data from its observation via gradient descent, using a non-reference loss function in an unsupervised manner. To address the vanishing gradient in extreme scenarios, e.g., structurally missing pixels, we introduce a parametric total variation regularization to constrain the DNN parameters and the tensor factor parameters, with theoretical analysis. We apply our HLRTF to typical inverse problems in multi-dimensional imaging, including completion, denoising, and snapshot spectral imaging, which demonstrates its generality and wide applicability. Extensive results illustrate the superiority of our method compared with state-of-the-art methods.
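A toy 2-D analogue of the recipe (our construction, not the paper's tensor factorization): factorize a partially observed image as a nonlinear low-rank product, where a small MLP plays the role of the embedded DNN transform, and optimize a masked data term plus a total-variation penalty by gradient descent. The rank, architecture, and weights below are illustrative guesses.

```python
import torch
import torch.nn as nn

H, W, r = 64, 64, 8
X = torch.rand(H, W)                             # stand-in for the true image
mask = (torch.rand(H, W) < 0.3).float()          # only 30% of pixels observed

U = nn.Parameter(0.1 * torch.randn(H, r))
V = nn.Parameter(0.1 * torch.randn(W, r))
g = nn.Sequential(nn.Linear(r, r), nn.ReLU(), nn.Linear(r, r))  # DNN transform
opt = torch.optim.Adam([U, V, *g.parameters()], lr=1e-2)

for _ in range(2000):
    recon = g(U) @ V.T                           # nonlinear low-rank reconstruction
    tv = (recon[1:] - recon[:-1]).abs().mean() \
       + (recon[:, 1:] - recon[:, :-1]).abs().mean()
    loss = ((recon - X) * mask).pow(2).mean() + 1e-3 * tv  # data term + TV
    opt.zero_grad(); loss.backward(); opt.step()
```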
Citations: 10
Rethinking Controllable Variational Autoencoders
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01865
Huajie Shao, Yifei Yang, Haohong Lin, Longzhong Lin, Yizhuo Chen, Qinmin Yang, Han Zhao
The Controllable Variational Autoencoder (ControlVAE) combines automatic control theory with the basic VAE model to manipulate the KL-divergence for overcoming posterior collapse and learning disentangled representations. It has shown success in a variety of applications, such as image generation, disentangled representation learning, and language modeling. However, when it comes to disentangled representation learning, ControlVAE does not delve into the rationale behind it. The goal of this paper is to develop a deeper understanding of ControlVAE in learning disentangled representations, including the choice of a desired KL-divergence (i.e., set point) and its stability during training. We first fundamentally explain its ability to disentangle latent variables from an information-bottleneck perspective. We show that the KL-divergence is an upper bound of the variational information bottleneck. By controlling the KL-divergence gradually from a small value to a target value, ControlVAE can disentangle the latent factors one by one. Based on this finding, we propose a new DynamicVAE that leverages a modified incremental PI (proportional-integral) controller, a variant of the proportional-integral-derivative (PID) algorithm, and employs a moving average as well as a hybrid annealing method to evolve the value of the KL-divergence smoothly in a tightly controlled fashion. In addition, we analytically derive a lower bound of the set point for disentangling. We then theoretically prove the stability of the proposed approach. Evaluation results on multiple benchmark datasets demonstrate that DynamicVAE achieves a good trade-off between disentanglement and reconstruction quality. We also discover that it can separate disentangled representation learning and reconstruction via manipulating the desired KL-divergence.
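The PI update itself is compact. Below is a sketch in the spirit of ControlVAE's controller; the sigmoid-shaped proportional term mirrors the ControlVAE formulation, while the gains, clamping range, and sample numbers are illustrative, and DynamicVAE's moving average and hybrid annealing are omitted.

```python
import math

def pi_beta(kl, set_point, state, kp=0.01, ki=1e-4, beta_max=1.0):
    """One PI step for the KL weight beta, driven by e(t) = set_point - KL(t)."""
    e = set_point - kl
    p = kp / (1.0 + math.exp(e))  # sigmoid P term: grows when KL overshoots
    # Incremental I term, clamped for anti-windup: KL above the set point
    # (negative error) pushes the accumulated term, and hence beta, upward.
    state["i"] = min(max(state["i"] - ki * e, 0.0), beta_max)
    return min(max(p + state["i"], 0.0), beta_max)

# Per-training-step usage with a set point of 6 nats: beta rises while the
# observed KL is above 6 and relaxes as KL settles onto the set point.
state = {"i": 0.0}
for kl in [12.0, 9.0, 6.5, 6.1, 5.9]:
    beta = pi_beta(kl, set_point=6.0, state=state)
```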
Citations: 4
Improving Adversarially Robust Few-shot Image Classification with Generalizable Representations
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.00882
Junhao Dong, Yuan Wang, Jianhuang Lai, Xiaohua Xie
Few-Shot Image Classification (FSIC) aims to recognize novel image classes with limited data, which is significant in practice. In this paper, we consider the FSIC problem in the presence of adversarial examples. This is an extremely challenging issue because current deep learning methods are still vulnerable when handling adversarial examples, even with massive labeled training samples. For this problem, existing works focus on training a network in a meta-learning fashion that depends on numerous sampled few-shot tasks. In comparison, we propose a simple but effective baseline that directly learns generalizable representations without tedious task sampling, and that is robust to unforeseen adversarial FSIC tasks. Specifically, we introduce an adversarial-aware mechanism to establish auxiliary supervision via feature-level differences between legitimate and adversarial examples. Furthermore, we design a novel adversarial-reweighted training manner to alleviate the imbalance among adversarial examples. A feature purifier is also employed as post-processing for adversarial features. Moreover, our method can obtain generalizable representations that retain superior transferability, even when facing cross-domain adversarial examples. Extensive experiments show that our method can significantly outperform state-of-the-art adversarially robust FSIC methods on two standard benchmarks.
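One way to render the feature-level auxiliary supervision concrete is the sketch below. It substitutes single-step FGSM for whatever attack the paper actually uses, and assumes a model that returns a (features, logits) pair; only the feature-difference idea itself is taken from the abstract.

```python
import torch
import torch.nn.functional as F

def adversarial_aware_loss(model, x, y, eps=8 / 255):
    """Cross-entropy on adversarial examples plus an auxiliary term that
    supervises the feature-level gap between clean and adversarial inputs."""
    x = x.clone().requires_grad_(True)
    feat_clean, logits = model(x)                # assumed (features, logits) API
    grad, = torch.autograd.grad(F.cross_entropy(logits, y), x)
    x_adv = (x + eps * grad.sign()).clamp(0, 1).detach()  # FGSM perturbation
    feat_adv, logits_adv = model(x_adv)
    return (F.cross_entropy(logits_adv, y)
            + F.mse_loss(feat_adv, feat_clean.detach()))  # auxiliary supervision
```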
Citations: 8
MPC: Multi-view Probabilistic Clustering
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.00929
Junjie Liu, Junlong Liu, Shaotian Yan, Rongxin Jiang, Xiang Tian, Boxuan Gu, Yao-wu Chen, Chen Shen, Jianqiang Huang
Despite the promising progress that has been made, two challenges of multi-view clustering (MVC) still await better solutions: i) most existing methods are either not qualified for, or require additional steps for, incomplete multi-view clustering, and ii) noise or outliers might significantly degrade the overall clustering performance. In this paper, we propose a novel unified framework for incomplete and complete MVC named multi-view probabilistic clustering (MPC). MPC equivalently transforms the multi-view pairwise posterior matching probability into a composition of each view's individual distribution, which tolerates missing data and can extend to any number of views. Graph-context-aware refinement with path propagation and co-neighbor propagation is then used to refine the pairwise probabilities, which alleviates the impact of noise and outliers. Finally, MPC also equivalently transforms the probabilistic clustering objective to avoid complete pairwise computation, and adjusts the clustering assignments by iteratively maximizing the joint probability. Extensive experiments on multiple benchmarks for incomplete and complete MVC show that MPC significantly outperforms previous state-of-the-art methods in both effectiveness and efficiency.
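A toy version of the per-view composition follows, with missing views tolerated by averaging log-odds over the views where both samples are present; the paper's exact composition, graph-context-aware refinement, and joint-probability maximization are not reproduced here (we cluster by simply thresholding the fused probabilities).

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def fuse_and_cluster(view_probs, present, thresh=0.5):
    """view_probs: per-view (n, n) pairwise match probabilities.
    present:    per-view boolean (n,) masks (is sample observed in view?)."""
    n = view_probs[0].shape[0]
    logit_sum, count = np.zeros((n, n)), np.zeros((n, n))
    for P, m in zip(view_probs, present):
        both = np.outer(m, m)                        # pair visible in this view
        P = np.clip(P, 1e-6, 1 - 1e-6)
        logit_sum += np.where(both, np.log(P / (1 - P)), 0.0)
        count += both
    fused = 1.0 / (1.0 + np.exp(-logit_sum / np.maximum(count, 1)))
    adj = csr_matrix(fused > thresh)                 # confident matches as edges
    return connected_components(adj, directed=False)[1]

# Toy: two views over four samples; sample 3 is missing from the second view.
rng = np.random.default_rng(0)
P1, P2 = rng.random((4, 4)), rng.random((4, 4))
labels = fuse_and_cluster([P1, P2],
                          [np.ones(4, bool), np.array([1, 1, 1, 0], bool)])
```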
Citations: 3
How Much More Data Do I Need? Estimating Requirements for Downstream Tasks
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.00037
Rafid Mahmood, James Lucas, David Acuna, Daiqing Li, Jonah Philion, J. M. Álvarez, Zhiding Yu, S. Fidler, M. Law
Given a small training data set and a learning algorithm, how much more data is necessary to reach a target validation or test performance? This question is of critical importance in applications such as autonomous driving or medical imaging, where collecting data is expensive and time-consuming. Overestimating or underestimating data requirements incurs substantial costs that could be avoided with an adequate budget. Prior work on neural scaling laws suggests that a power-law function can fit the validation performance curve and extrapolate it to larger data set sizes. We find that this does not immediately translate to the more difficult downstream task of estimating the data set size required to meet a target performance. In this work, we consider a broad class of computer vision tasks and systematically investigate a family of functions that generalize the power-law function to allow for better estimation of data requirements. Finally, we show that incorporating a tuned correction factor and collecting over multiple rounds significantly improves the performance of the data estimators. Using our guidelines, practitioners can accurately estimate data requirements of machine learning systems to gain savings in both development time and data acquisition costs.
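The basic power-law extrapolation the paper starts from can be sketched in a few lines. The pilot sizes, scores, and initial guesses below are made up; the paper's point is precisely that such a naive fit needs correction factors and multiple collection rounds to be reliable.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** b + c        # saturating curve: score -> c as n grows

sizes = np.array([1e3, 2e3, 4e3, 8e3, 16e3])       # hypothetical pilot runs
scores = np.array([0.61, 0.67, 0.72, 0.76, 0.79])  # validation accuracy

(a, b, c), _ = curve_fit(power_law, sizes, scores,
                         p0=(-1.0, -0.3, 0.9), maxfev=10000)
target = 0.85
if target < c:                   # only reachable below the fitted asymptote c
    n_needed = ((target - c) / a) ** (1.0 / b)     # invert the fitted curve
    print(f"estimated samples for {target:.2f}: {int(n_needed)}")
```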
Citations: 11
Alleviating Semantics Distortion in Unsupervised Low-Level Image-to-Image Translation via Structure Consistency Constraint
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01771
Jiaxian Guo, Jiacheng Li, Huan Fu, Mingming Gong, Kun Zhang, Dacheng Tao
Unsupervised image-to-image (I2I) translation aims to learn a domain mapping function that can preserve the semantics of the input images without paired data. However, because the underlying semantics distributions in the source and target domains are often mismatched, current distribution-matching-based methods may distort the semantics when matching distributions, resulting in inconsistency between the input and translated images, which is known as the semantics distortion problem. In this paper, we focus on low-level I2I translation, where the structure of images is highly related to their semantics. To alleviate semantic distortions in such translation tasks without paired supervision, we propose a novel I2I translation constraint, called the Structure Consistency Constraint (SCC), to promote the consistency of image structures by reducing the randomness of color transformation in the translation process. To facilitate estimation and maximization of SCC, we propose an approximate representation of mutual information called relative Squared-loss Mutual Information (rSMI), which enjoys efficient analytic solutions. Our SCC can be easily incorporated into most existing translation models. Quantitative and qualitative comparisons on a range of low-level I2I translation tasks show that translation models with SCC outperform the original models by a significant margin with little additional computational and memory cost.
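To see what "structure consistency" buys, consider a toy gradient-matching penalty: colors may be remapped freely, but the edge map of the translation must track the input. This is only the motivation in code form; the paper's actual constraint is estimated through rSMI, which we do not reproduce here.

```python
import torch
import torch.nn.functional as F

def structure_consistency(x, y):
    """Penalize mismatch between spatial gradients of the luminance of
    input x and translation y (shapes: B x C x H x W)."""
    def grads(t):
        lum = t.mean(dim=1, keepdim=True)           # crude luminance
        return (lum[..., 1:, :] - lum[..., :-1, :],
                lum[..., :, 1:] - lum[..., :, :-1])
    gx1, gy1 = grads(x)
    gx2, gy2 = grads(y)
    return F.l1_loss(gx1, gx2) + F.l1_loss(gy1, gy2)

# Added to a translation objective as an extra weighted term, e.g.:
# total = gan_loss + lam * structure_consistency(x, y)
```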
Citations: 8
Towards Efficient Data Free Blackbox Adversarial Attack
Pub Date: 2022-06-01 DOI: 10.1109/CVPR52688.2022.01469
J Zhang, Bo Li, Jianghe Xu, Shuang Wu, Shouhong Ding, Lei Zhang, Chao Wu
Classic black-box adversarial attacks can take advantage of transferable adversarial examples generated by a similar substitute model to successfully fool the target model. However, these substitute models need to be trained on the target model's training data, which is hard to acquire due to privacy or transmission reasons. Recognizing the limited availability of real data for adversarial queries, recent works proposed to train substitute models in a data-free black-box scenario. However, their generative adversarial network (GAN) based framework suffers from convergence failure and model collapse, resulting in low efficiency. In this paper, by rethinking the collaborative relationship between the generator and the substitute model, we design a novel black-box attack framework. The proposed method can efficiently imitate the target model with a small number of queries and achieve a high attack success rate. Comprehensive experiments over six datasets demonstrate the effectiveness of our method against state-of-the-art attacks. In particular, we conduct both label-only and probability-only attacks on the Microsoft Azure online model, and achieve a 100% attack success rate with only 0.46% of the query budget of the SOTA method [49].
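A generic sketch of the alternating generator/substitute game that such data-free frameworks share (the losses, batch size, and the `target_query` oracle interface below are our assumptions, not the paper's exact design):

```python
import torch
import torch.nn.functional as F

def train_step(generator, substitute, target_query, opt_g, opt_s, z_dim=100):
    """One alternating step: the generator seeks queries on which substitute
    and black-box target disagree; the substitute then distills the target."""
    # 1) Generator maximizes disagreement on synthesized queries.
    x = generator(torch.randn(64, z_dim))
    with torch.no_grad():
        t = target_query(x)                       # black-box probabilities
    loss_g = -F.l1_loss(F.softmax(substitute(x), dim=1), t)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    # 2) Substitute distills the target on fresh synthesized queries.
    x = generator(torch.randn(64, z_dim)).detach()
    with torch.no_grad():
        t = target_query(x)
    loss_s = F.l1_loss(F.softmax(substitute(x), dim=1), t)
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
```

Once the substitute imitates the target well enough, transferable adversarial examples crafted against the substitute are replayed on the target.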
Citations: 35