
Latest Publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Vertical Layering of Quantized Neural Networks for Heterogeneous Inference
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-12-10 | DOI: 10.48550/arXiv.2212.05326
Hai Wu, Ruifei He, Hao Hao Tan, Xiaojuan Qi, Kaibin Huang
Although considerable progress has been made in neural network quantization for efficient inference, existing methods do not scale to heterogeneous devices: a dedicated model must be trained, transmitted, and stored for each specific hardware setting, incurring considerable costs in model training and maintenance. In this paper, we study a new vertical-layered representation of neural network weights that encapsulates all quantized models in a single one. It represents weights as a group of bits (vertical layers) organized from the most significant bit (also called the basic layer) to less significant bits (enhance layers). Hence, a neural network with an arbitrary quantization precision can be obtained by adding the corresponding enhance layers to the basic layer. However, we empirically find that models obtained with existing quantization methods suffer severe performance degradation when adapted to the vertical-layered weight representation. To this end, we propose a simple once quantization-aware training (QAT) scheme for obtaining high-performance vertical-layered models. Our design combines a cascade downsampling mechanism with multi-objective optimization to train the shared source model weights so that they are updated simultaneously while considering the performance of all networks. After the model is trained, the lowest-bit-width quantized weights become the basic layer of the vertical-layered network, and every bit dropped along the downsampling process acts as an enhance layer. Our design is extensively evaluated on the CIFAR-100 and ImageNet datasets. Experiments show that the proposed vertical-layered representation and once-QAT scheme effectively embody multiple quantized networks in a single model and allow one-time training, while delivering performance comparable to that of quantized models tailored to any specific bit-width.
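To make the layered representation concrete, here is a minimal sketch (not the authors' implementation; all names are illustrative) of splitting 8-bit quantized weights into MSB-first bit planes and rebuilding a lower-precision model from the basic layer plus enhance layers:

```python
# Toy bit-plane decomposition: planes[0] is the "basic layer" (MSB),
# planes[1:] are "enhance layers"; a k-bit model sums the top-k planes.
import numpy as np

def to_bit_planes(q_weights, num_bits=8):
    """Split unsigned integer weights into bit planes, MSB first."""
    return [(q_weights >> b) & 1 for b in reversed(range(num_bits))]

def reconstruct(planes, k, num_bits=8):
    """Rebuild a k-bit weight tensor from the top-k bit planes."""
    q = np.zeros_like(planes[0])
    for i in range(k):
        q = (q << 1) | planes[i]
    return q << (num_bits - k)  # re-align to the original magnitude scale

rng = np.random.default_rng(0)
w8 = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)  # toy 8-bit weights
planes = to_bit_planes(w8)
w4 = reconstruct(planes, k=4)  # a 4-bit-precision variant of the same model
print(np.abs(w8.astype(int) - w4.astype(int)).max())  # quantization gap < 16
```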
Citations: 0
PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-29 | DOI: 10.48550/arXiv.2211.16110
H. Flynn, D. Reeb, M. Kandemir, Jan Peters
PAC-Bayes has recently re-emerged as an effective theory from which one can derive principled learning algorithms with tight performance guarantees. However, applications of PAC-Bayes to bandit problems are relatively rare, which is unfortunate: many decision-making problems in healthcare, finance, and the natural sciences can be modelled as bandit problems, and in many of these applications principled algorithms with strong performance guarantees would be very welcome. This survey provides an overview of PAC-Bayes bounds for bandit problems and an experimental comparison of these bounds. On the one hand, we found that PAC-Bayes bounds are a useful tool for designing offline bandit algorithms with performance guarantees. In our experiments, a PAC-Bayesian offline contextual bandit algorithm was able to learn randomised neural network policies with competitive expected reward and non-vacuous performance guarantees. On the other hand, the PAC-Bayesian online bandit algorithms that we tested had loose cumulative regret bounds. We conclude by discussing some topics for future work on PAC-Bayesian bandit algorithms.
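As a hedged illustration of what evaluating such a bound looks like, the sketch below computes a McAllester-style PAC-Bayes bound for a Gaussian posterior against a Gaussian prior; it is a generic textbook bound, not any specific bound surveyed in the paper, and all numbers are hypothetical:

```python
# With probability >= 1 - delta over the sample of size n:
#   E[risk] <= emp_risk + sqrt((KL(Q||P) + ln(2 sqrt(n)/delta)) / (2n))
import math

def kl_gaussians(mu_q, sigma_q, sigma_p):
    """KL(Q || P) for isotropic Gaussians N(mu_q, sigma_q^2 I) vs N(0, sigma_p^2 I)."""
    d = len(mu_q)
    return 0.5 * (d * (sigma_q**2 / sigma_p**2 - 1.0 - 2.0 * math.log(sigma_q / sigma_p))
                  + sum(m * m for m in mu_q) / sigma_p**2)

def mcallester_bound(emp_risk, kl, n, delta=0.05):
    """Upper bound on the expected risk of the randomised predictor Q."""
    return emp_risk + math.sqrt((kl + math.log(2.0 * math.sqrt(n) / delta)) / (2.0 * n))

mu_q = [0.1] * 100  # hypothetical learned posterior mean over 100 parameters
kl = kl_gaussians(mu_q, sigma_q=0.1, sigma_p=1.0)
print(mcallester_bound(emp_risk=0.12, kl=kl, n=50_000))  # ~0.16, i.e. non-vacuous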
Citations: 2
Dynamic Loss For Robust Learning
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-22 | DOI: 10.48550/arXiv.2211.12506
Shenwang Jiang, Jianan Li, Jizhou Zhang, Ying Wang, Tingfa Xu
Label noise and class imbalance are common challenges encountered in real-world datasets. Existing approaches for robust learning often focus on addressing either label noise or class imbalance individually, resulting in suboptimal performance when both biases are present. To bridge this gap, this work introduces a novel meta-learning-based dynamic loss that adapts the objective functions during the training process to effectively learn a classifier from long-tailed noisy data. Specifically, our dynamic loss consists of two components: a label corrector and a margin generator. The label corrector is responsible for correcting noisy labels, while the margin generator generates per-class classification margins by capturing the underlying data distribution and the learning state of the classifier. In addition, we employ a hierarchical sampling strategy that enriches a small amount of unbiased metadata with diverse and challenging samples. This enables the joint optimization of the two components in the dynamic loss through meta-learning, allowing the classifier to effectively adapt to clean and balanced test data. Extensive experiments conducted on multiple real-world and synthetic datasets with various types of data biases, including CIFAR-10/100, Animal-10N, ImageNet-LT, and Webvision, demonstrate that our method achieves state-of-the-art accuracy. The code for our approach will soon be made publicly available.
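A hedged sketch of the two components described above: a label corrector that blends the observed one-hot label with the model's own prediction, and a margin generator that shifts per-class logits. In the paper both would be meta-learned on unbiased metadata; here they are plain learnable tensors, and all names are illustrative:

```python
import torch
import torch.nn.functional as F

class DynamicLoss(torch.nn.Module):
    def __init__(self, num_classes):
        super().__init__()
        self.alpha = torch.nn.Parameter(torch.tensor(0.8))         # trust in the noisy label
        self.margins = torch.nn.Parameter(torch.zeros(num_classes))  # per-class margins

    def forward(self, logits, targets):
        one_hot = F.one_hot(targets, logits.size(1)).float()
        # Label corrector: convex combination of noisy label and prediction.
        a = self.alpha.sigmoid()
        corrected = a * one_hot + (1 - a) * logits.softmax(1).detach()
        # Margin generator: subtract a per-class margin from the logits.
        adjusted = logits - self.margins
        return -(corrected * F.log_softmax(adjusted, dim=1)).sum(1).mean()

loss_fn = DynamicLoss(num_classes=10)
loss = loss_fn(torch.randn(32, 10), torch.randint(0, 10, (32,)))
loss.backward()
```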
Citations: 0
Event Transformer+. A multi-purpose solution for efficient event data processing
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-22 | DOI: 10.48550/arXiv.2211.12222
Alberto Sabater, L. Montesano, A. C. Murillo
Event cameras record sparse illumination changes with high temporal resolution and high dynamic range. Thanks to their sparse recording and low power consumption, they are increasingly used in applications such as AR/VR and autonomous driving. Current top-performing methods often ignore specific event-data properties, leading to generic but computationally expensive algorithms, while event-aware methods do not perform as well. We propose Event Transformer+, which improves our seminal work EvT with a refined patch-based event representation and a more robust backbone to achieve more accurate results, while still benefiting from event-data sparsity to increase efficiency. Additionally, we show how our system can work with different data modalities and propose specific output heads for event-stream classification (i.e., action recognition) and per-pixel prediction (dense depth estimation). Evaluation results show better performance than the state of the art while requiring minimal computation resources, on both GPU and CPU.
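The sparsity argument can be made concrete with a toy sketch (illustrative only, not the EvT+ representation): events are binned into spatial patches and only patches that actually received events are turned into tokens, so compute scales with scene activity rather than frame size:

```python
import numpy as np

H, W, P = 128, 128, 16                    # sensor size and patch size
events = np.random.rand(5000, 4)          # toy stream: columns x, y, t, polarity
events[:, 0] *= W
events[:, 1] *= H

# Assign each event to a patch and keep only active patches.
patch_ids = (events[:, 1] // P).astype(int) * (W // P) + (events[:, 0] // P).astype(int)
active = np.unique(patch_ids)             # only these go to the backbone
tokens = np.zeros((len(active), 2))       # one token per active patch
for i, pid in enumerate(active):
    sel = events[patch_ids == pid]
    tokens[i] = [(sel[:, 3] > 0.5).mean(), len(sel)]  # polarity ratio, event count

print(f"{len(active)} / {(H // P) * (W // P)} patches active")
```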
Citations: 1
Learning from partially labeled data for multi-organ and tumor segmentation
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-13 | DOI: 10.48550/arXiv.2211.06894
Yutong Xie, Jianpeng Zhang, Yong Xia, Chunhua Shen
Medical image benchmarks for the segmentation of organs and tumors suffer from the partial-labeling issue due to the intensive cost of labor and expertise. Current mainstream approaches follow the practice of one network solving one task. With this pipeline, not only is the performance limited by the typically small dataset of a single task, but the computation cost also increases linearly with the number of tasks. To address this, we propose a Transformer-based dynamic on-demand network (TransDoDNet) that learns to segment organs and tumors on multiple partially labeled datasets. Specifically, TransDoDNet has a hybrid backbone composed of a convolutional neural network and a Transformer. A dynamic head enables the network to accomplish multiple segmentation tasks flexibly. Unlike existing approaches that fix kernels after training, the kernels in the dynamic head are generated adaptively by the Transformer, which employs the self-attention mechanism to model long-range organ-wise dependencies and decodes the organ embedding that represents each organ. We create a large-scale partially labeled benchmark for Multi-Organ and Tumor Segmentation, termed MOTS, and demonstrate the superior performance of our TransDoDNet over other competitors on seven organ and tumor segmentation tasks. This study also provides a general 3D medical image segmentation model, which has been pre-trained on the large-scale MOTS benchmark and has demonstrated advanced performance over currently predominant self-supervised learning methods. Code and data are available at https://github.com/jianpengz/DoDNet.
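A minimal sketch of the dynamic-head idea, assuming a controller that maps a task (organ) embedding to the kernels of a small per-task segmentation head over shared 3D features; shapes and names are illustrative, not the TransDoDNet architecture:

```python
import torch
import torch.nn.functional as F

feat_ch, num_tasks, emb_dim = 32, 7, 64
features = torch.randn(1, feat_ch, 24, 24, 24)           # shared 3D feature map
task_emb = torch.nn.Embedding(num_tasks, emb_dim)        # one embedding per organ task
controller = torch.nn.Linear(emb_dim, feat_ch * 2 + 2)   # weights + biases for a 1x1x1 head

def dynamic_head(features, task_id):
    # Generate the head's conv kernels on demand from the task embedding.
    params = controller(task_emb(torch.tensor([task_id])))[0]
    w = params[: feat_ch * 2].view(2, feat_ch, 1, 1, 1)  # 2 output channels: bg / fg
    b = params[feat_ch * 2 :]
    return F.conv3d(features, w, b)

logits = dynamic_head(features, task_id=3)               # (1, 2, 24, 24, 24)
```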
Citations: 1
SC-DepthV3: Robust Self-supervised Monocular Depth Estimation for Dynamic Scenes
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-07 | DOI: 10.48550/arXiv.2211.03660
Libo Sun, Jiawang Bian, Huangying Zhan, Wei Yin, I. Reid, Chunhua Shen
Self-supervised monocular depth estimation has shown impressive results in static scenes. However, it relies on the multi-view consistency assumption for training, which is violated in dynamic object regions and under occlusion. Consequently, existing methods show poor accuracy in dynamic scenes, and the estimated depth map is blurred at object boundaries because these are usually occluded in other training views. In this paper, we propose SC-DepthV3 to address these challenges. Specifically, we introduce an external pretrained monocular depth estimation model to generate a single-image depth prior, namely pseudo-depth, based on which we propose novel losses to boost self-supervised training. As a result, our model can predict sharp and accurate depth maps, even when trained on monocular videos of highly dynamic scenes. We demonstrate the significantly superior performance of our method over previous methods on six challenging datasets, and we provide detailed ablation studies for the proposed terms. Source code and data have been released at https://github.com/JiawangBian/sc_depth_pl.
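To illustrate how a frozen pseudo-depth prior can supervise a self-supervised network, here is a toy scale-aligned consistency term; the actual losses in SC-DepthV3 differ, and every name here is a placeholder:

```python
import torch

def pseudo_depth_loss(pred, pseudo):
    """Scale-aligned L1 between predicted depth and a frozen pseudo-depth map."""
    scale = (pseudo.median() / pred.median()).detach()  # align the unknown scales
    return (pred * scale - pseudo).abs().mean()

pred = torch.rand(1, 1, 64, 64, requires_grad=True) + 0.1  # network output (toy)
with torch.no_grad():
    pseudo = torch.rand(1, 1, 64, 64) + 0.1                # external pretrained model output (toy)
loss = pseudo_depth_loss(pred, pseudo)
loss.backward()
```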
Citations: 5
Simple Primitives with Feasibility- and Contextuality-Dependence for Open-World Compositional Zero-shot Learning
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-05 | DOI: 10.48550/arXiv.2211.02895
Zhe Liu, Yun Li, L. Yao, Xiaojun Chang, Wei Fang, Xiaojun Wu, Yi Yang
The task of Open-World Compositional Zero-Shot Learning (OW-CZSL) is to recognize novel state-object compositions in images from the set of all possible compositions, where the novel compositions are absent during the training stage. The performance of conventional methods degrades significantly due to the large cardinality of possible compositions. Some recent works treat the simple primitives (i.e., states and objects) as independent and predict them separately to reduce cardinality. However, this ignores the heavy dependence between states, objects, and compositions. In this paper, we model this dependence via feasibility and contextuality. Feasibility-dependence refers to the unequal feasibility of compositions; e.g., hairy is more feasible with cat than with building in the real world. Contextuality-dependence represents the contextual variance in images; e.g., cat shows diverse appearances when it is dry or wet. We design Semantic Attention (SA) to capture feasibility semantics and alleviate impossible predictions, driven by the visual similarity between simple primitives. We also propose a generative Knowledge Disentanglement (KD) to disentangle images into unbiased representations, easing the contextual bias. Moreover, we compatibly complement the independent compositional probability model with the learned feasibility and contextuality. In the experiments, we demonstrate the superior or competitive performance of our SA-and-KD-guided Simple Primitives (SAD-SP) on three benchmark datasets.
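A hedged sketch of feasibility-aware composition scoring: independent state and object probabilities are combined and then reweighted by a feasibility score (which the paper derives from primitive similarity; here it is a random placeholder). This illustrates the scoring principle only, not the SAD-SP model:

```python
import torch

num_states, num_objects = 5, 4
p_state = torch.softmax(torch.randn(num_states), dim=0)    # P(state | image)
p_object = torch.softmax(torch.randn(num_objects), dim=0)  # P(object | image)
feasibility = torch.rand(num_states, num_objects)          # e.g. from embedding similarity

# Composition score = independent primitive probabilities x feasibility weight.
scores = p_state[:, None] * p_object[None, :] * feasibility
best = scores.flatten().argmax()
print(divmod(best.item(), num_objects))                    # (state_idx, object_idx)
```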
Open-World composition Zero-Shot Learning (low - czsl)的任务是从所有可能的组合中识别图像中的新状态-对象组合,其中新组合在训练阶段不存在。由于可能组合的基数很大,传统方法的性能显著降低。最近的一些研究认为简单的原语(即状态和对象)是独立的,并分别预测它们以减少基数。然而,它忽略了状态、对象和组合之间的严重依赖关系。在本文中,我们通过可行性和情境性来建模依赖关系。可行性依赖是指组合的不平等可行性,例如,在现实世界中,毛茸茸的猫比建筑更可行。情境依赖性表示图像中的情境差异,例如,猫在干燥或潮湿时表现出不同的外观。我们设计了语义注意(Semantic Attention, SA)来捕获可行性语义,以减轻由简单原语之间的视觉相似性驱动的不可能预测。我们还提出了一种生成式知识解纠缠(KD)来将图像解纠缠为无偏表示,从而缓解语境偏见。此外,我们将独立组合概率模型与学习到的可行性和情境性相结合。在实验中,我们在三个基准数据集上展示了我们的优越或竞争性能,sa和kd引导的简单原语(SAD-SP)。
{"title":"Simple Primitives with Feasibility- and Contextuality-Dependence for Open-World Compositional Zero-shot Learning","authors":"Zhe Liu, Yun Li, L. Yao, Xiaojun Chang, Wei Fang, Xiaojun Wu, Yi Yang","doi":"10.48550/arXiv.2211.02895","DOIUrl":"https://doi.org/10.48550/arXiv.2211.02895","url":null,"abstract":"The task of Open-World Compositional Zero-Shot Learning (OW-CZSL) is to recognize novel state-object compositions in images from all possible compositions, where the novel compositions are absent during the training stage. The performance of conventional methods degrades significantly due to the large cardinality of possible compositions. Some recent works consider simple primitives (i.e., states and objects) independent and separately predict them to reduce cardinality. However, it ignores the heavy dependence between states, objects, and compositions. In this paper, we model the dependence via feasibility and contextuality. Feasibility-dependence refers to the unequal feasibility of compositions, e.g., hairy is more feasible with cat than with building in the real world. Contextuality-dependence represents the contextual variance in images, e.g., cat shows diverse appearances when it is dry or wet. We design Semantic Attention (SA) to capture the feasibility semantics to alleviate impossible predictions, driven by the visual similarity between simple primitives. We also propose a generative Knowledge Disentanglement (KD) to disentangle images into unbiased representations, easing the contextual bias. Moreover, we complement the independent compositional probability model with the learned feasibility and contextuality compatibly. In the experiments, we demonstrate our superior or competitive performance, SA-and-kD-guided Simple Primitives (SAD-SP), on three benchmark datasets.","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":" ","pages":""},"PeriodicalIF":23.6,"publicationDate":"2022-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48671147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Robust Reflection Removal with Flash-only Cues in the Wild
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-05 | DOI: 10.48550/arXiv.2211.02914
Chenyang Lei, Xu-dong Jiang, Qifeng Chen
We propose a simple yet effective reflection-free cue for robust reflection removal from a pair of flash and ambient (no-flash) images. The reflection-free cue exploits a flash-only image obtained by subtracting the ambient image from the corresponding flash image in raw data space. The flash-only image is equivalent to an image taken in a dark environment with only a flash on. This flash-only image is visually reflection-free and thus can provide robust cues to infer the reflection in the ambient image. Since the flash-only image usually has artifacts, we further propose a dedicated model that not only utilizes the reflection-free cue but also avoids introducing artifacts, which helps accurately estimate reflection and transmission. Our experiments on real-world images with various types of reflection demonstrate the effectiveness of our model with reflection-free flash-only cues: our model outperforms state-of-the-art reflection removal approaches by more than 5.23 dB in PSNR. We extend our approach to handheld photography to address the misalignment between the flash and no-flash pair. With misaligned training data and the alignment module, our aligned model outperforms our previous version by more than 3.19 dB in PSNR on a misaligned dataset. We also study using linear RGB images as training data. Our source code and dataset are publicly available at https://github.com/ChenyangLEI/flash-reflection-removal.
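The cue itself is a single subtraction in raw space, sketched below with toy arrays (real pipelines operate on raw sensor mosaics): because the ambient illumination, including its reflections, is present in both exposures, subtracting the no-flash image cancels it and leaves only the flash-added light:

```python
import numpy as np

ambient_raw = np.random.rand(256, 256, 3).astype(np.float32)      # no-flash capture
flash_component = np.random.rand(256, 256, 3).astype(np.float32)  # light added by the flash
flash_raw = ambient_raw + flash_component                         # flash capture (toy model)

flash_only = np.clip(flash_raw - ambient_raw, 0.0, None)          # the reflection-free cue
assert np.allclose(flash_only, flash_component, atol=1e-6)
```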
Citations: 0
Efficient Spatially Sparse Inference for Conditional GANs and Diffusion Models
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-03 | DOI: 10.48550/arXiv.2211.02048
Muyang Li, Ji Lin, Chenlin Meng, Stefano Ermon, Song Han, Jun-Yan Zhu
During image editing, existing deep generative models tend to re-synthesize the entire output from scratch, including the unedited regions. This leads to a significant waste of computation, especially for minor editing operations. In this work, we present Spatially Sparse Inference (SSI), a general-purpose technique that selectively performs computation for edited regions and accelerates various generative models, including both conditional GANs and diffusion models. Our key observation is that users tend to edit the input image gradually. This motivates us to cache and reuse the feature maps of the original image. Given an edited image, we sparsely apply the convolutional filters to the edited regions while reusing the cached features for the unedited areas. Based on our algorithm, we further propose the Sparse Incremental Generative Engine (SIGE) to convert the computation reduction into latency reduction on off-the-shelf hardware. With about 1%-area edits, SIGE accelerates DDPM by 3.0× on an NVIDIA RTX 3090 and 4.6× on an Apple M1 Pro GPU, Stable Diffusion by 7.2× on the 3090, and GauGAN by 5.6× on the 3090 and 5.2× on the M1 Pro GPU. We further extend SIGE to additional layer types and apply it to Stable Diffusion. Additionally, we offer support for the Apple M1 Pro GPU and include more results to substantiate the efficacy of our method.
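The caching idea can be demonstrated with a toy incremental convolution (illustrative only, not the SIGE kernels): feature maps from the original image are cached, and after an edit only the tiles whose inputs changed are recomputed. A 1x1 convolution is used so that tiles are independent; real kernels would need halo regions around each tile:

```python
import torch

def incremental_conv(conv, new_img, old_img, cached_out, tile=32):
    out = cached_out.clone()
    diff = (new_img - old_img).abs().sum(1)               # (B, H, W) change map
    for y in range(0, new_img.shape[2], tile):
        for x in range(0, new_img.shape[3], tile):
            if diff[:, y : y + tile, x : x + tile].any():  # this tile was edited
                out[:, :, y : y + tile, x : x + tile] = conv(
                    new_img[:, :, y : y + tile, x : x + tile])
    return out

conv = torch.nn.Conv2d(3, 8, 1)                # 1x1 conv keeps tiles independent
old = torch.rand(1, 3, 128, 128)
new = old.clone()
new[:, :, :32, :32] += 0.5                     # a ~6%-area edit
with torch.no_grad():
    cached = conv(old)                         # computed once, then reused
    out = incremental_conv(conv, new, old, cached)
    assert torch.allclose(out, conv(new), atol=1e-6)
```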
Citations: 14
Unsupervised Deraining: Where Asymmetric Contrastive Learning Meets Self-similarity
IF 23.6 | CAS Tier 1, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-11-02 | DOI: 10.48550/arXiv.2211.00837
Yi Chang, Yun Guo, Yuntong Ye, C. Yu, Lin Zhu, Xile Zhao, Luxin Yan, Yonghong Tian
Most existing learning-based deraining methods are trained with supervision on synthetic rainy-clean pairs. The domain gap between synthetic and real rain makes them generalize poorly to complex real rainy scenes. Moreover, existing methods mainly utilize the properties of the image or rain layers independently, while few consider their mutually exclusive relationship. To resolve this dilemma, we explore the intrinsic intra-similarity within each layer and the inter-exclusiveness between the two layers, and propose an unsupervised non-local contrastive learning (NLCL) deraining method. Non-local self-similar image patches, as positives, are pulled tightly together, while rain patches, as negatives, are pushed far away, and vice versa. On the one hand, the intrinsic self-similarity knowledge within the positive/negative samples of each layer helps us discover more compact representations; on the other hand, the mutually exclusive property between the two layers enriches the discriminative decomposition. Thus, the internal self-similarity within each layer (similarity) and the external exclusive relationship between the two layers (dissimilarity), serving as a generic image prior, jointly facilitate unsupervised differentiation of rain from the clean image. We further discover that the intrinsic dimension of non-local image patches is generally higher than that of rain patches. This insight motivates us to design an asymmetric contrastive loss that precisely models the compactness discrepancy of the two layers, thereby improving the discriminative decomposition. In addition, recognizing the limited quality of existing real rain datasets, which are often small-scale or obtained from the internet, we collect a large-scale real dataset under various rainy weather conditions that contains high-resolution rainy images. Extensive experiments conducted on different real rainy datasets demonstrate that the proposed method obtains state-of-the-art performance in real deraining. Both the code and the newly collected datasets will be available at https://owuchangyuo.github.io.
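As a hedged sketch of a contrastive objective over patch embeddings in this spirit, the code below pulls an anchor patch toward non-local patches of the same (clean) layer and pushes it from rain-layer patches. Plain multi-positive InfoNCE is used for illustration; the paper's asymmetric loss differs in its treatment of the two layers:

```python
import torch
import torch.nn.functional as F

def patch_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE over one anchor embedding, K positives, and M negatives."""
    a = F.normalize(anchor, dim=-1)
    pos = F.normalize(positives, dim=-1)
    neg = F.normalize(negatives, dim=-1)
    logits = torch.cat([pos @ a, neg @ a]) / tau          # (K + M,) similarities
    labels = torch.zeros_like(logits)
    labels[: pos.size(0)] = 1.0 / pos.size(0)             # uniform mass on positives
    return -(labels * F.log_softmax(logits, dim=0)).sum()

anchor = torch.randn(64)            # embedding of a clean-layer patch (toy)
positives = torch.randn(8, 64)      # non-local self-similar clean patches
negatives = torch.randn(32, 64)     # rain-layer patches
loss = patch_contrastive_loss(anchor, positives, negatives)
```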
Citations: 0