
Latest Publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

Learning Gait Representation from Massive Unlabelled Walking Videos: A Benchmark
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-28 | DOI: 10.48550/arXiv.2206.13964
Chao Fan, Saihui Hou, Jilong Wang, Yongzhen Huang, Shiqi Yu
Gait depicts individuals' unique and distinguishing walking patterns and has become one of the most promising biometric features for human identification. As a fine-grained recognition task, gait recognition is easily affected by many factors and usually requires a large amount of completely annotated data, which is costly and difficult to obtain at scale. This paper proposes a large-scale self-supervised benchmark for gait recognition with contrastive learning, aiming to learn general gait representations from massive unlabelled walking videos for practical applications by offering informative walking priors and diverse real-world variations. Specifically, we collect a large-scale unlabelled gait dataset, GaitLU-1M, consisting of 1.02M walking sequences, and propose a conceptually simple yet empirically powerful baseline model, GaitSSB. Experimentally, we evaluate the pre-trained model on four widely used gait benchmarks, CASIA-B, OU-MVLP, GREW and Gait3D, with and without transfer learning. The unsupervised results are comparable to or even better than those of the early model-based and GEI-based methods. After transfer learning, GaitSSB outperforms existing methods by a large margin in most cases and also showcases superior generalization capacity. Further experiments indicate that pre-training can save about 50% and 80% of the annotation costs of GREW and Gait3D, respectively. Theoretically, we discuss the critical issues for a gait-specific contrastive framework and present some insights for further study. As far as we know, GaitLU-1M is the first large-scale unlabelled gait dataset, and GaitSSB is the first method that achieves remarkable unsupervised results on the aforementioned benchmarks. The source code of GaitSSB and the anonymized data of GaitLU-1M are available at https://github.com/ShiqiYu/OpenGait.
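The pre-training objective is contrastive: two augmented views of the same unlabelled walking sequence are pulled together in embedding space while other sequences in the batch are pushed apart. The sketch below shows a generic InfoNCE/NT-Xent loss of this kind in PyTorch; the encoder, augmentations, batch size, and embedding dimension are illustrative stand-ins, not the actual GaitSSB components.

```python
# Minimal sketch of a contrastive (NT-Xent) pre-training loss over two augmented
# views of unlabelled walking sequences. All shapes/names are placeholders.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE loss between two batches of sequence embeddings, each of shape (B, D)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                    # (2B, D)
    sim = z @ z.t() / temperature                     # cosine similarities
    sim.fill_diagonal_(float("-inf"))                 # mask self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)              # positive = the other view

# Usage: embeddings of two augmented views of the same unlabelled walking clip.
z_view1 = torch.randn(32, 256)   # stand-in for encoder(aug1(silhouettes))
z_view2 = torch.randn(32, 256)   # stand-in for encoder(aug2(silhouettes))
loss = nt_xent_loss(z_view1, z_view2)
```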
Citations: 10
StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-19 | DOI: 10.48550/arXiv.2206.09479
Minguk Kang, Joonghyuk Shin, Jaesik Park
Generative Adversarial Networks (GANs) are among the state-of-the-art generative models for realistic image synthesis. While training and evaluating GANs have become increasingly important, the current GAN research ecosystem does not provide reliable benchmarks on which evaluation is conducted consistently and fairly. Furthermore, because there are few validated GAN implementations, researchers devote considerable time to reproducing baselines. We study the taxonomy of GAN approaches and present a new open-source library named StudioGAN. StudioGAN supports 7 GAN architectures, 9 conditioning methods, 4 adversarial losses, 12 regularization modules, 3 differentiable augmentations, 7 evaluation metrics, and 5 evaluation backbones. With our training and evaluation protocol, we present a large-scale benchmark using various datasets (CIFAR10, ImageNet, AFHQv2, FFHQ, and Baby/Papa/Granpa-ImageNet) and 3 different evaluation backbones (InceptionV3, SwAV, and Swin Transformer). Unlike other benchmarks used in the GAN community, we train representative GANs, including the BigGAN and StyleGAN series, in a unified training pipeline and quantify generation performance with 7 evaluation metrics. The benchmark also evaluates other cutting-edge generative models (e.g., StyleGAN-XL, ADM, MaskGIT, and RQ-Transformer). StudioGAN provides GAN implementations, training, and evaluation scripts with pre-trained weights. StudioGAN is available at https://github.com/POSTECH-CVLab/PyTorch-StudioGAN.
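To avoid misrepresenting the StudioGAN API itself, the snippet below only sketches one of the standard adversarial losses that such a unified benchmark typically implements, the hinge loss used by BigGAN-style models; names and shapes are illustrative.

```python
# Hinge adversarial loss in its textbook form (one of the common "adversarial
# losses" a unified GAN benchmark covers); not taken from the StudioGAN codebase.
import torch
import torch.nn.functional as F

def d_hinge_loss(real_logits: torch.Tensor, fake_logits: torch.Tensor) -> torch.Tensor:
    """Discriminator hinge loss: push real logits above +1 and fake logits below -1."""
    return F.relu(1.0 - real_logits).mean() + F.relu(1.0 + fake_logits).mean()

def g_hinge_loss(fake_logits: torch.Tensor) -> torch.Tensor:
    """Generator hinge loss: raise the discriminator's score on generated samples."""
    return -fake_logits.mean()

# Usage with placeholder discriminator outputs.
loss_d = d_hinge_loss(torch.randn(16), torch.randn(16))
loss_g = g_hinge_loss(torch.randn(16))
```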
Citations: 28
DPCN++: Differentiable Phase Correlation Network for Versatile Pose Registration
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-12 | DOI: 10.48550/arXiv.2206.05707
Zexi Chen, Yiyi Liao, Haozhe Du, Haodong Zhang, Xuecheng Xu, Haojian Lu, R. Xiong, Yue Wang
Pose registration is critical in vision and robotics. This paper focuses on the challenging task of initialization-free pose registration, up to 7DoF, for homogeneous and heterogeneous measurements. While recent learning-based methods show promise using differentiable solvers, they either rely on heuristically defined correspondences or require initialization. Phase correlation seeks solutions in the spectral domain and is correspondence-free and initialization-free. Following this, we propose a differentiable solver and combine it with simple feature extraction networks, namely DPCN++. It can perform registration for homogeneous/heterogeneous inputs and generalizes well to unseen objects. Specifically, the feature extraction networks first learn dense feature grids from a pair of homogeneous/heterogeneous measurements. These feature grids are then transformed into a translation- and scale-invariant spectrum representation based on the Fourier transform and spherical radial aggregation, decoupling translation and scale from rotation. Next, the rotation, scale, and translation are independently and efficiently estimated in the spectral domain step by step. The entire pipeline is differentiable and trained end-to-end. We evaluate DPCN++ on a wide range of tasks with different input modalities, including 2D bird's-eye-view images, 3D object and scene measurements, and medical images. Experimental results demonstrate that DPCN++ outperforms both classical and learning-based baselines, especially on partially observed and heterogeneous measurements.
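The spectral-domain principle behind DPCN++ is classical phase correlation, which recovers a relative shift without correspondences or initialization. A minimal NumPy sketch for integer 2D translation is given below; the full method additionally decouples rotation and scale and makes the solver differentiable, which this toy version does not attempt.

```python
# Classical 2D phase correlation: the peak of the inverse FFT of the normalized
# cross-power spectrum sits at the relative shift between the two images.
import numpy as np

def phase_correlation(ref: np.ndarray, moved: np.ndarray) -> tuple:
    """Estimate the integer (row, col) shift d such that moved ~= np.roll(ref, d)."""
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.abs(R) + 1e-8                      # keep only the phase
    corr = np.fft.ifft2(R).real                # impulse at the true shift
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the midpoint back to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape))

# Usage: recover a known circular shift of (5, -3).
ref = np.random.rand(64, 64)
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))
print(phase_correlation(ref, moved))   # expected: (5, -3)
```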
Citations: 0
Exploring Feature Self-relation for Self-supervised Transformer
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-06-10 | DOI: 10.48550/arXiv.2206.05184
Zhong-Yu Li, Shanghua Gao, Ming-Ming Cheng
Learning representations with self-supervision for convolutional networks (CNNs) has proven effective for vision tasks. As an alternative to CNNs, vision transformers (ViTs) have strong representation ability thanks to spatial self-attention and channel-level feedforward networks. Recent works reveal that self-supervised learning helps unleash the great potential of ViTs. Still, most works follow self-supervised strategies designed for CNNs, e.g., instance-level discrimination of samples, and ignore the properties of ViTs. We observe that relational modeling on the spatial and channel dimensions distinguishes ViTs from other networks. To enforce this property, we explore the feature SElf-RElation (SERE) for training self-supervised ViTs. Specifically, instead of conducting self-supervised learning solely on feature embeddings from multiple views, we utilize the feature self-relations, i.e., spatial/channel self-relations, for self-supervised learning. Self-relation-based learning further enhances the relation modeling ability of ViTs, resulting in stronger representations that stably improve performance on multiple downstream tasks. Our source code is publicly available.
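As an illustration of the self-relation targets described above, the sketch below builds channel-channel and token-token similarity matrices from a ViT token map; in a SERE-style objective, two augmented views would then be asked to agree on these matrices. Shapes, names, and the omitted alignment loss are placeholders, not the paper's exact formulation.

```python
# Channel and spatial self-relations of a token feature map (illustrative only).
import torch
import torch.nn.functional as F

def channel_self_relation(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, N, C) token features -> (B, C, C) cosine relations between channels."""
    f = F.normalize(feat.transpose(1, 2), dim=2)      # (B, C, N), unit-norm channel profiles
    return f @ f.transpose(1, 2)

def spatial_self_relation(feat: torch.Tensor) -> torch.Tensor:
    """feat: (B, N, C) token features -> (B, N, N) cosine relations between tokens."""
    f = F.normalize(feat, dim=2)                      # unit-norm tokens
    return f @ f.transpose(1, 2)

tokens = torch.randn(2, 196, 384)                     # e.g. 14x14 ViT tokens, dim 384
print(channel_self_relation(tokens).shape, spatial_self_relation(tokens).shape)
```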
Citations: 6
Evaluating the Generalization Ability of Super-Resolution Networks
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-05-14 | DOI: 10.48550/arXiv.2205.07019
Yihao Liu, Hengyuan Zhao, Jinjin Gu, Y. Qiao, Chao Dong
Performance and generalization ability are two important aspects for evaluating deep learning models. However, research on the generalization ability of Super-Resolution (SR) networks is currently absent. Assessing the generalization ability of deep models not only helps us to understand their intrinsic mechanisms, but also allows us to quantitatively measure their applicability boundaries, which is important for unrestricted real-world applications. To this end, we make the first attempt to propose a Generalization Assessment Index for SR networks, namely SRGA. SRGA exploits the statistical characteristics of the internal features of deep networks to measure generalization ability. Notably, it is a non-parametric, learning-free metric. To better validate our method, we collect a patch-based image evaluation set (PIES) that includes both synthetic and real-world images, covering a wide range of degradations. With the SRGA and PIES dataset, we benchmark existing SR models on generalization ability. This work provides insights and tools for future research on model generalization in low-level vision.
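The abstract does not spell out the metric, so the snippet below is only a toy illustration of the general idea of a non-parametric, learning-free comparison of internal deep-feature statistics between two image sets (here, a symmetric KL distance between per-channel Gaussian fits); it is an assumed simplification, not the SRGA definition.

```python
# Toy feature-statistics distance (assumed formulation, NOT the actual SRGA metric):
# compare per-channel statistics of deep features extracted from two image sets.
import torch

def feature_stat_distance(feats_a: torch.Tensor, feats_b: torch.Tensor) -> torch.Tensor:
    """feats_*: (N, C) deep features of image patches; smaller = more similar statistics."""
    mu_a, mu_b = feats_a.mean(dim=0), feats_b.mean(dim=0)
    var_a = feats_a.var(dim=0) + 1e-6
    var_b = feats_b.var(dim=0) + 1e-6
    # Symmetric KL divergence between per-channel Gaussian fits.
    kl_ab = 0.5 * (torch.log(var_b / var_a) + (var_a + (mu_a - mu_b) ** 2) / var_b - 1.0)
    kl_ba = 0.5 * (torch.log(var_a / var_b) + (var_b + (mu_b - mu_a) ** 2) / var_a - 1.0)
    return (kl_ab + kl_ba).mean()

# Usage with stand-in features from patches of two degradation types.
print(feature_stat_distance(torch.randn(512, 64), 2.0 * torch.randn(512, 64) + 0.5))
```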
Citations: 7
Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-05-03 | DOI: 10.48550/arXiv.2205.01679
Fangzhou Mu, Sicheng Mo, Jiayong Peng, Xiaochun Liu, J. Nam, S. Raghavan, A. Velten, Yin Li
Computational approaches to imaging around corners, also known as non-line-of-sight (NLOS) imaging, are becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent development towards practical NLOS imaging, Nam et al. [1] demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100x faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge the gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well on real captures despite being trained exclusively on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained using diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, capable of processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, along with the fast capturing system, will accelerate the future development of NLOS imaging for real-world applications that require high-speed imaging.
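One of the two physics priors named above, volume rendering, has a standard textbook form: densities and radiances predicted along a ray are alpha-composited into a pixel value. The sketch below shows only that generic compositing step in PyTorch; the wave-propagation module and the paper's full NLOS pipeline are not reproduced here.

```python
# Generic volume-rendering compositing along rays (the "volume rendering" prior
# in its textbook form); NLOS geometry and wave propagation are omitted.
import torch

def composite(density: torch.Tensor, radiance: torch.Tensor, step: float) -> torch.Tensor:
    """density: (R, S), radiance: (R, S, 3) sampled along R rays at S steps -> (R, 3) pixels."""
    alpha = 1.0 - torch.exp(-density * step)                       # opacity per sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=1)              # accumulated transmittance
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)
    weights = alpha * trans                                        # contribution per sample
    return (weights.unsqueeze(-1) * radiance).sum(dim=1)

pixels = composite(torch.rand(1024, 64), torch.rand(1024, 64, 3), step=0.02)
print(pixels.shape)  # torch.Size([1024, 3])
```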
Citations: 7
Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-26 | DOI: 10.48550/arXiv.2204.12139
Youjian Zhang, Chaoyue Wang, D. Tao
Real-world dynamic scene deblurring has long been a challenging task since paired blurry-sharp training data is unavailable. Conventional Maximum A Posteriori estimation and deep learning-based deblurring methods are restricted by handcrafted priors and synthetic blurry-sharp training pairs, respectively, thereby failing to generalize to real dynamic blurriness. To this end, we propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data. The proposed NeurMAP consists of a motion estimation network and a deblurring network which are trained jointly to model the (re)blurring process (i.e., the likelihood function). Meanwhile, the motion estimation network is trained to explore the motion information in images by applying an implicit dynamic motion prior, and in return enforces the deblurring network training (i.e., providing a sharp image prior). The proposed NeurMAP is an orthogonal approach to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets. Experiments demonstrate our superiority on both quantitative metrics and visual quality over state-of-the-art methods. Codes are available at https://github.com/yjzhang96/NeurMAP-deblur.
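The core training signal implied by the joint (re)blurring model is a self-consistency term: the deblurred prediction, re-blurred with the estimated motion, should reproduce the observed blurry input. The sketch below shows such a loss with stand-in networks and a toy linear-motion blur operator; it is an assumed simplification, not the actual NeurMAP architecture or priors.

```python
# Stand-in sketch of a reblurring-consistency objective: deblur(y) recombined with
# motion(y) must re-synthesize the blurry input y. All modules are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyConvStandIn(nn.Module):
    """Placeholder for both the deblurring and the motion-estimation network."""
    def __init__(self, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, out_ch, 3, padding=1))
    def forward(self, x):
        return self.net(x)

def reblur(sharp: torch.Tensor, flow: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Average the sharp image warped along a per-pixel linear motion (toy blur model)."""
    b, _, h, w = sharp.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(b, h, w, 2)
    acc = 0.0
    for t in torch.linspace(-0.5, 0.5, steps):
        grid = base + t * flow.permute(0, 2, 3, 1)
        acc = acc + F.grid_sample(sharp, grid, align_corners=True)
    return acc / steps

deblur_net, motion_net = TinyConvStandIn(out_ch=3), TinyConvStandIn(out_ch=2)
blurry = torch.rand(2, 3, 64, 64)
sharp_pred, flow_pred = deblur_net(blurry), 0.1 * torch.tanh(motion_net(blurry))
loss = F.l1_loss(reblur(sharp_pred, flow_pred), blurry)   # reblurring self-consistency
```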
Citations: 1
WebFace260M: A Benchmark for Million-Scale Deep Face Recognition
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-21 | DOI: 10.48550/arXiv.2204.10149
Zheng Hua Zhu, Guan Huang, Jiankang Deng, Yun Ye, Junjie Huang, Xinze Chen, Jiagang Zhu, Tian Yang, Dalong Du, Jiwen Lu, Jie Zhou
In this paper, we contribute a new million-scale face recognition benchmark, containing uncurated 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M) as training data, as well as an elaborately designed time-constrained evaluation protocol. Firstly, we collect 4M name lists and download 260M faces from the Internet. Then, an efficient and scalable pipeline that cleans the data automatically via self-training is devised to purify the tremendous WebFace260M. To our best knowledge, the cleaned WebFace42M is the largest public face recognition training set in the community. Referring to practical deployments, the Face Recognition Under Inference Time conStraint (FRUITS) protocol and a new test set with rich attributes are constructed. Moreover, we gather a large-scale masked face sub-set for biometrics assessment under COVID-19. For a comprehensive evaluation of face matchers, three recognition tasks are performed under standard, masked and unbiased settings, respectively. Equipped with this benchmark, we delve into million-scale face recognition problems. Enabled by WebFace42M, we reduce the failure rate by 40% on the challenging IJB-C set and rank 3rd among 430 entries on NIST-FRVT. Even 10% of the data (WebFace4M) shows superior performance compared with the public training sets. The proposed benchmark shows enormous potential on standard, masked and unbiased face recognition scenarios.
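The cleaning pipeline itself is not detailed in the abstract; the sketch below only illustrates the generic kind of embedding-based filtering that self-training cleaning relies on (drop faces whose embedding strays too far from their claimed identity's centroid), with a made-up threshold and random stand-in embeddings rather than the paper's exact procedure.

```python
# Illustrative identity-cleaning heuristic (assumption, not the paper's pipeline):
# keep faces whose embeddings are close to the centroid of their claimed identity.
import numpy as np

def clean_identity(embeddings: np.ndarray, threshold: float = 0.4) -> np.ndarray:
    """embeddings: (N, D) L2-normalized face embeddings claimed to share one identity.
    Returns a boolean mask of faces to keep."""
    centroid = embeddings.mean(axis=0)
    centroid /= np.linalg.norm(centroid) + 1e-12
    cosine = embeddings @ centroid                   # similarity to the identity prototype
    return cosine >= threshold

# Usage with random stand-in embeddings (a real pipeline would use a face model).
emb = np.random.randn(100, 512)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
keep = clean_identity(emb)
print(int(keep.sum()), "of", len(emb), "faces kept")
```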
Citations: 14
Learning Compositional Representations for Effective Low-Shot Generalization
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-04-17 | DOI: 10.48550/arXiv.2204.08090
Samarth Mishra, Pengkai Zhu, Venkatesh Saligrama
We propose Recognition as Part Composition (RPC), an image encoding approach inspired by human cognition. It is based on the cognitive theory that humans recognize complex objects by components, and that they build a small, compact vocabulary of concepts with which to represent each instance. RPC encodes images by first decomposing them into salient parts, and then encoding each part as a mixture of a small number of prototypes, each representing a certain concept. We find that this type of learning inspired by human cognition can overcome hurdles faced by deep convolutional networks in low-shot generalization tasks, like zero-shot learning, few-shot learning and unsupervised domain adaptation. Furthermore, we find that a classifier using an RPC image encoder is fairly robust to adversarial attacks, to which deep neural networks are known to be prone. Given that our image encoding principle is based on human cognition, one would expect the encodings to be interpretable by humans, which we find to be the case via crowd-sourcing experiments. Finally, we propose an application of these interpretable encodings in the form of generating synthetic attribute annotations for evaluating zero-shot learning methods on new datasets.
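The key encoding step, representing each salient part as a mixture over a small prototype vocabulary, can be written as a softmax over part-prototype similarities. The sketch below shows that step alone; the part decomposition, the prototype learning objective, and the dimensions and temperature are illustrative assumptions rather than RPC's actual values.

```python
# Minimal sketch: encode part features as soft mixtures over a small prototype
# (concept) vocabulary. Dimensions and temperature are placeholders.
import torch
import torch.nn.functional as F

def part_to_prototype_mixture(parts: torch.Tensor, prototypes: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """parts: (P, D) part features, prototypes: (K, D) concept vocabulary -> (P, K) weights."""
    sim = F.normalize(parts, dim=1) @ F.normalize(prototypes, dim=1).t()
    return F.softmax(sim / temperature, dim=1)       # each part = soft mixture of concepts

parts = torch.randn(6, 128)          # e.g. 6 salient parts of one image
prototypes = torch.randn(32, 128)    # small, compact concept vocabulary
weights = part_to_prototype_mixture(parts, prototypes)
print(weights.shape, weights.sum(dim=1))   # (6, 32), each row sums to 1
```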
Citations: 0
MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
IF 23.6 | Tier 1 (Computer Science) | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-03-31 | DOI: 10.48550/arXiv.2203.16875
Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, Xin Tong
There has been rapid progress recently on 3D human rendering, including novel view synthesis and pose animation, based on advances in neural radiance fields (NeRF). However, most existing methods focus on person-specific training, and their training typically requires multi-view videos. This paper addresses a new and challenging task: rendering novel views and novel poses of a person unseen in training, using only multiview still images as input, without videos. For this task, we propose a simple yet surprisingly effective method to train a generalizable NeRF with multiview images as conditional input. The key ingredient is a dedicated representation combining a canonical NeRF and a volume deformation scheme. Using a canonical space enables our method to learn shared properties of humans and to generalize easily to different people. Volume deformation is used to connect the canonical space with the input and target images and to query image features for radiance and density prediction. We leverage the parametric 3D human model fitted on the input images to derive the deformation, which works quite well in practice when combined with our canonical NeRF. The experiments on both real and synthetic data with the novel view synthesis and pose animation tasks collectively demonstrate the efficacy of our method.
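The central idea, warp an observation-space query point into a shared canonical space before evaluating a single canonical NeRF, can be expressed in a few lines. The sketch below uses a stand-in skinning-style deformation and a toy MLP; the real method derives the deformation from a fitted parametric body model and conditions on multi-view image features, none of which is reproduced here.

```python
# Sketch of "deform to canonical space, then query one shared NeRF" (stand-in
# skinning weights and MLP; not the paper's actual networks or body-model fitting).
import torch
import torch.nn as nn

class CanonicalNeRFStandIn(nn.Module):
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
    def forward(self, x):                       # x: (N, 3) canonical-space points
        out = self.mlp(x)
        return out[:, :1], out[:, 1:]           # density (N, 1), color features (N, 3)

def warp_to_canonical(x_obs: torch.Tensor, rots: torch.Tensor, trans: torch.Tensor,
                      skin_w: torch.Tensor) -> torch.Tensor:
    """Blend inverse rigid per-bone transforms: x_obs (N,3), rots (B,3,3), trans (B,3), skin_w (N,B)."""
    local = x_obs[:, None, :] - trans[None, :, :]            # (N, B, 3)
    per_bone = torch.einsum("bji,nbj->nbi", rots, local)     # rotate by R^T for each bone
    return (skin_w.unsqueeze(-1) * per_bone).sum(dim=1)      # skinning-weighted blend

nerf = CanonicalNeRFStandIn()
x_obs = torch.rand(1024, 3)
rots = torch.eye(3).expand(24, 3, 3)            # 24 stand-in bone rotations
trans = torch.zeros(24, 3)
skin_w = torch.softmax(torch.rand(1024, 24), dim=1)
density, color = nerf(warp_to_canonical(x_obs, rots, trans, skin_w))
```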
Citations: 22