
Latest Publications in IEEE Transactions on Pattern Analysis and Machine Intelligence

MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2022-03-31 · DOI: 10.48550/arXiv.2203.16875
Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, Xin Tong
There has been rapid progress recently on 3D human rendering, including novel view synthesis and pose animation, based on the advances of neural radiance fields (NeRF). However, most existing methods focus on person-specific training, and their training typically requires multi-view videos. This paper deals with a new and challenging task: rendering novel views and novel poses for a person unseen in training, using only multiview still images as input without videos. For this task, we propose a simple yet surprisingly effective method to train a generalizable NeRF with multiview images as conditional input. The key ingredient is a dedicated representation combining a canonical NeRF and a volume deformation scheme. Using a canonical space enables our method to learn shared properties of humans and easily generalize to different people. Volume deformation is used to connect the canonical space with the input and target images and to query image features for radiance and density prediction. We leverage the parametric 3D human model fitted to the input images to derive the deformation, which works quite well in practice when combined with our canonical NeRF. The experiments on both real and synthetic data with the novel view synthesis and pose animation tasks collectively demonstrate the efficacy of our method.
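The abstract's key ingredient, a canonical NeRF queried through a volume deformation, can be illustrated with a minimal PyTorch sketch. Everything below is an assumption for illustration: the deformation derived from the fitted parametric human model is reduced to a simple inverse linear-blend skinning with given joint transforms and blend weights, and `CanonicalNeRF`, `warp_to_canonical`, the layer sizes, and the feature dimension are hypothetical rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class CanonicalNeRF(nn.Module):
    """Toy canonical-space NeRF: maps a 3D point plus an image-conditioned
    feature vector to density and RGB. Layer sizes are illustrative."""
    def __init__(self, feat_dim=32, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (density, r, g, b)
        )

    def forward(self, x_canonical, cond_feat):
        out = self.mlp(torch.cat([x_canonical, cond_feat], dim=-1))
        sigma, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sigma, rgb

def warp_to_canonical(x_target, joint_transforms, skinning_weights):
    """Simplified inverse linear-blend skinning: map target-pose query points back
    to the canonical pose using per-point blend weights over K joint transforms."""
    # joint_transforms: (K, 4, 4) target-to-canonical rigid transforms
    # skinning_weights: (N, K) blend weights per query point
    x_h = torch.cat([x_target, torch.ones_like(x_target[..., :1])], dim=-1)  # (N, 4)
    per_joint = torch.einsum('kij,nj->nki', joint_transforms, x_h)           # (N, K, 4)
    x_canonical = (skinning_weights.unsqueeze(-1) * per_joint).sum(dim=1)[..., :3]
    return x_canonical

# Usage: warp sampled ray points into canonical space, then query the shared NeRF.
N, K = 1024, 24
x_target = torch.randn(N, 3)
transforms = torch.eye(4).repeat(K, 1, 1)
weights = torch.softmax(torch.randn(N, K), dim=-1)
cond_feat = torch.randn(N, 32)                      # features sampled from input views
model = CanonicalNeRF()
sigma, rgb = model(warp_to_canonical(x_target, transforms, weights), cond_feat)
```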
Citations: 22
Fine Detailed Texture Learning for 3D Meshes with Generative Models
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2022-03-17 · DOI: 10.48550/arXiv.2203.09362
A. Dundar, Jun Gao, Andrew Tao, Bryan Catanzaro
This paper presents a method to achieve fine detailed texture learning for 3D models that are reconstructed from both multi-view and single-view images. The framework is posed as an adaptation problem and proceeds progressively: in the first stage we focus on learning accurate geometry, whereas in the second stage we focus on learning the texture with a generative adversarial network. The contributions of the paper lie in the generative learning pipeline, where we propose two improvements. First, since the learned textures should be spatially aligned, we propose an attention mechanism that relies on the learnable positions of pixels. Second, since the discriminator receives aligned texture maps, we augment its input with a learnable embedding, which improves the feedback to the generator. We achieve significant improvements on multi-view sequences from the Tripod dataset as well as on single-view image datasets, Pascal 3D+ and CUB. We demonstrate that our method achieves superior 3D textured models compared to previous works. Please visit our web page for 3D visuals: https://research.nvidia.com/labs/adlr/textured-3d-learning.
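A minimal sketch of the second improvement, a discriminator whose input is augmented with a learnable embedding. The module name, channel counts, and the choice of a single shared spatial embedding are assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class EmbeddingAugmentedDiscriminator(nn.Module):
    """Illustrative patch discriminator whose input texture map is concatenated with
    a learnable embedding of the same spatial size (one hypothetical reading of the
    paper's 'learnable embedding' augmentation)."""
    def __init__(self, tex_channels=3, embed_channels=8, size=256):
        super().__init__()
        # One learnable embedding map shared across samples (assumption).
        self.embed = nn.Parameter(torch.randn(1, embed_channels, size, size) * 0.01)
        self.net = nn.Sequential(
            nn.Conv2d(tex_channels + embed_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
        )

    def forward(self, texture_map):
        b = texture_map.shape[0]
        x = torch.cat([texture_map, self.embed.expand(b, -1, -1, -1)], dim=1)
        return self.net(x)

scores = EmbeddingAugmentedDiscriminator()(torch.randn(2, 3, 256, 256))
```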
Citations: 7
OPAL: Occlusion Pattern Aware Loss for Unsupervised Light Field Disparity Estimation
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2022-03-04 · DOI: 10.48550/arXiv.2203.02231
Peng Li, Jiayin Zhao, Jingyao Wu, Chao Deng, Haoqian Wang, Tao Yu
Light field disparity estimation is an essential task in computer vision. Currently, supervised learning-based methods have achieved better performance than both unsupervised and optimization-based methods. However, the generalization capacity of supervised methods on real-world data, where no ground truth is available for training, remains limited. In this paper, we argue that unsupervised methods can achieve not only much stronger generalization capacity on real-world data but also more accurate disparity estimation results on synthetic datasets. To fulfill this goal, we present the Occlusion Pattern Aware Loss, named OPAL, which successfully extracts and encodes general occlusion patterns inherent in the light field for calculating the disparity loss. OPAL enables: i) accurate and robust disparity estimation by teaching the network how to handle occlusions effectively, and ii) significantly reduced network parameters required for accurate and efficient estimation. We further propose an EPI transformer and a gradient-based refinement module for achieving more accurate and pixel-aligned disparity estimation results. Extensive experiments demonstrate that our method not only significantly improves accuracy compared with SOTA unsupervised methods, but also possesses stronger generalization capacity on real-world data than SOTA supervised methods. Last but not least, the network training and inference efficiency are much higher than those of existing learning-based methods. Our code will be made publicly available.
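For orientation, the sketch below shows a generic occlusion-weighted photometric warping loss for unsupervised light field disparity, the kind of baseline OPAL improves on. It is explicitly not the OPAL formulation: the occlusion-pattern extraction described in the abstract is replaced here by a simple residual-based soft weighting, and all function names and tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def warp_view(src, disparity, du, dv):
    """Warp a neighbouring sub-aperture view toward the centre view using the
    estimated centre-view disparity and the angular offset (du, dv)."""
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=src.dtype, device=src.device),
                            torch.arange(w, dtype=src.dtype, device=src.device),
                            indexing='ij')
    grid_x = xs.unsqueeze(0) + du * disparity.squeeze(1)
    grid_y = ys.unsqueeze(0) + dv * disparity.squeeze(1)
    grid = torch.stack([2 * grid_x / (w - 1) - 1, 2 * grid_y / (h - 1) - 1], dim=-1)
    return F.grid_sample(src, grid, align_corners=True)

def occlusion_weighted_photometric_loss(center, neighbours, offsets, disparity):
    """Generic stand-in for an occlusion-aware unsupervised disparity loss:
    down-weights views whose warping residual is large (a crude occlusion cue)."""
    residuals = []
    for src, (du, dv) in zip(neighbours, offsets):
        residuals.append((warp_view(src, disparity, du, dv) - center).abs().mean(1, keepdim=True))
    r = torch.stack(residuals, dim=0)              # (V, B, 1, H, W)
    weights = torch.softmax(-r, dim=0)             # occluded views get small weight
    return (weights * r).sum(dim=0).mean()

# Toy usage: centre view plus two neighbouring views at angular offsets (+1, 0) and (-1, 0).
center = torch.rand(1, 3, 32, 32)
neigh = [torch.rand(1, 3, 32, 32), torch.rand(1, 3, 32, 32)]
disp = torch.zeros(1, 1, 32, 32, requires_grad=True)
loss = occlusion_weighted_photometric_loss(center, neigh, [(1, 0), (-1, 0)], disp)
loss.backward()
```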
Citations: 3
Local and Global GANs with Semantic-Aware Upsampling for Image Generation
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2022-02-28 · DOI: 10.48550/arXiv.2203.00047
Hao Tang, Ling Shao, Philip H. S. Torr, N. Sebe
In this paper, we address the task of semantic-guided image generation. One challenge common to most existing image-level generation methods is difficulty in generating small objects and detailed local textures. To tackle this issue, in this work we consider generating images using local context. As such, we design a local class-specific generative network using semantic maps as guidance, which separately constructs and learns subgenerators for different classes, enabling it to capture finer details. To learn more discriminative class-specific feature representations for the local generation, we also propose a novel classification module. To combine the advantages of both global image-level and local class-specific generation, a joint generation network is designed with an attention fusion module and a dual-discriminator structure embedded. Lastly, we propose a novel semantic-aware upsampling method, which has a larger receptive field and can take far-away pixels that are semantically related for feature upsampling, enabling it to better preserve semantic consistency for instances with the same semantic labels. Extensive experiments on two image generation tasks show the superior performance of the proposed method. State-of-the-art results are established by large margins on both tasks and on nine challenging public benchmarks.
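The semantic-aware upsampling idea, letting far-away but semantically related pixels contribute to upsampled features, can be illustrated with a deliberately simplified sketch: each high-resolution location receives the mean of the low-resolution features that share its semantic label. The paper's operator is learned and has a large receptive field; the function below is only a hypothetical stand-in with assumed tensor shapes.

```python
import torch
import torch.nn.functional as F

def semantic_aware_upsample(feat_lr, sem_lr, sem_hr):
    """Illustrative semantic-aware upsampling: each high-resolution location is
    filled with the average of low-resolution features sharing its semantic label,
    so far-away but semantically related pixels contribute.
    feat_lr: (B, C, h, w) low-res features
    sem_lr:  (B, h, w)   low-res semantic labels
    sem_hr:  (B, H, W)   high-res semantic labels
    """
    b, c, h, w = feat_lr.shape
    _, H, W = sem_hr.shape
    num_classes = int(torch.maximum(sem_lr.max(), sem_hr.max()).item()) + 1
    out = feat_lr.new_zeros(b, c, H, W)
    for cls in range(num_classes):
        mask_lr = (sem_lr == cls).float().unsqueeze(1)                       # (B,1,h,w)
        denom = mask_lr.sum(dim=(2, 3), keepdim=True).clamp(min=1)
        class_mean = (feat_lr * mask_lr).sum(dim=(2, 3), keepdim=True) / denom  # (B,C,1,1)
        out = out + (sem_hr == cls).float().unsqueeze(1) * class_mean
    return out

# Usage with toy tensors: 2x upsampling of 8-channel features guided by labels.
feat = torch.randn(1, 8, 16, 16)
lab_lr = torch.randint(0, 3, (1, 16, 16))
lab_hr = F.interpolate(lab_lr.unsqueeze(1).float(), scale_factor=2, mode='nearest').squeeze(1).long()
up = semantic_aware_upsample(feat, lab_lr, lab_hr)   # (1, 8, 32, 32)
```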
Citations: 13
Multimodal Image Synthesis and Editing: A Survey
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2022-02-04 · DOI: 10.31237/osf.io/24bhm
Fangneng Zhan, Yingchen Yu, Rongliang Wu, Jiahui Zhang, Shijian Lu
As information exists in various modalities in the real world, effective interaction and fusion among multimodal information plays a key role in the creation and perception of multimodal data in computer vision and deep learning research. With its superb power in modeling the interaction among multimodal information, multimodal image synthesis and editing has become a hot research topic in recent years. Instead of providing explicit guidance for network training, multimodal guidance offers intuitive and flexible means for image synthesis and editing. On the other hand, this field also faces several challenges in the alignment of multimodal features, the synthesis of high-resolution images, faithful evaluation metrics, etc. In this survey, we comprehensively contextualize the advances in recent multimodal image synthesis and editing and formulate taxonomies according to data modalities and model types. We start with an introduction to different guidance modalities in image synthesis and editing, and then describe multimodal image synthesis and editing approaches extensively according to their model types. After that, we describe benchmark datasets and evaluation metrics as well as corresponding experimental results. Finally, we provide insights about the current research challenges and possible directions for future research. A project associated with this survey is available at https://github.com/fnzhan/MISE.
Citations: 30
Novelty Detection and Online Learning for Chunk Data Streams.
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2021-07-01 · Epub Date: 2021-06-08 · DOI: 10.1109/TPAMI.2020.2965531
Yi Wang, Yi Ding, Xiangjian He, Xin Fan, Chi Lin, Fengqi Li, Tianzhu Wang, Zhongxuan Luo, Jiebo Luo

Data stream analysis aims at extracting discriminative information for classification from continuously incoming samples. It is extremely challenging to detect novel data while incrementally updating the model efficiently and stably, especially for high-dimensional and/or large-scale data streams. This paper proposes an efficient framework for novelty detection and incremental learning for unlabeled chunk data streams. First, an accurate factorization-free kernel discriminative analysis (FKDA-X) is put forward by solving a linear system in the kernel space. FKDA-X produces a Reproducing Kernel Hilbert Space (RKHS), in which unlabeled chunk data can be detected and classified by multiple known classes in a single decision model with a deterministic classification boundary. Moreover, based on FKDA-X, two optimized methods, FKDA-CX and FKDA-C, are proposed. FKDA-CX uses the micro-cluster centers of the original data as its input to achieve excellent performance in novelty detection. FKDA-C and incremental FKDA-C (IFKDA-C), which use the class centers of the original data as their input, are extremely fast in online learning. Theoretical analysis and experimental validation on under-sampled and large-scale real-world datasets demonstrate that the proposed algorithms make it possible to learn unlabeled chunk data streams with significantly lower computational costs and comparable accuracy relative to state-of-the-art approaches.
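To make the "solving a linear system in kernel space" idea concrete, here is a generic kernel least-squares classifier obtained from one regularized linear solve, with no eigen-decomposition. It is a textbook baseline in the same spirit, not the paper's FKDA-X, and the kernel choice, regularization, and toy data are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Standard RBF kernel matrix between row-vector sets A (n, d) and B (m, d)."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_ls_classifier(X_train, y_train, lam=1e-3):
    """Illustrative kernel least-squares classifier: one regularized linear system
    in kernel space yields coefficients for all classes (no factorization needed).
    This is a generic baseline, not the paper's FKDA-X algorithm."""
    classes = np.unique(y_train)
    Y = (y_train[:, None] == classes[None, :]).astype(float)     # one-hot targets
    K = rbf_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), Y)   # (n, n_classes)
    def predict(X_test):
        scores = rbf_kernel(X_test, X_train) @ alpha
        return classes[scores.argmax(axis=1)]
    return predict

# Usage on toy data: two Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
predict = kernel_ls_classifier(X, y)
print((predict(X) == y).mean())   # training accuracy of the toy model
```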

Citations: 6
Paying Attention to Video Object Pattern Understanding.
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2021-07-01 · Epub Date: 2021-06-08 · DOI: 10.1109/TPAMI.2020.2966453
Wenguan Wang, Jianbing Shen, Xiankai Lu, Steven C H Hoi, Haibin Ling

This paper conducts a systematic study on the role of visual attention in video object pattern understanding by elaborately annotating three popular video segmentation datasets (DAVIS16, Youtube-Objects, and SegTrack V2) with dynamic eye-tracking data in the unsupervised video object segmentation (UVOS) setting. For the first time, we quantitatively verified the high consistency of visual attention behavior among human observers, and found a strong correlation between human attention and explicit primary object judgments during dynamic, task-driven viewing. Such novel observations provide an in-depth insight into the underlying rationale behind video object patterns. Inspired by these findings, we decouple UVOS into two sub-tasks: UVOS-driven Dynamic Visual Attention Prediction (DVAP) in the spatiotemporal domain, and Attention-Guided Object Segmentation (AGOS) in the spatial domain. Our UVOS solution enjoys three major advantages: 1) modular training without using expensive video segmentation annotations; instead, more affordable dynamic fixation data are used to train the initial video attention module and existing fixation-segmentation paired static/image data are used to train the subsequent segmentation module; 2) comprehensive foreground understanding through multi-source learning; and 3) additional interpretability from the biologically inspired and assessable attention. Experiments on four popular benchmarks show that, even without using expensive video object mask annotations, our model achieves compelling performance compared with the state of the art and enjoys fast processing speed (10 fps on a single GPU). Our collected eye-tracking data and algorithm implementations have been made publicly available at https://github.com/wenguanwang/AGS.
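The DVAP-then-AGOS decoupling amounts to a two-stage pipeline in which a predicted attention map conditions the segmentation head. The toy module below illustrates only that data flow; both sub-networks are placeholder convolutions and every layer size is an assumption, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AttentionGuidedSegmentation(nn.Module):
    """Toy two-stage pipeline mirroring the paper's decoupling: a visual-attention
    module predicts a fixation map, which then guides a segmentation head."""
    def __init__(self, in_ch=3, hidden=16):
        super().__init__()
        self.attention = nn.Sequential(        # DVAP-like module (placeholder)
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.segment = nn.Sequential(          # AGOS-like module (placeholder)
            nn.Conv2d(in_ch + 1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, frame):
        att = self.attention(frame)                                  # where to look
        mask_logits = self.segment(torch.cat([frame, att], dim=1))   # what to segment
        return att, mask_logits

att, mask = AttentionGuidedSegmentation()(torch.randn(1, 3, 64, 64))
```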

Citations: 73
Perceptual Texture Similarity Estimation: An Evaluation of Computational Features.
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2021-07-01 · Epub Date: 2021-06-08 · DOI: 10.1109/TPAMI.2020.2964533
Xinghui Dong, Junyu Dong, Mike J Chantler

Estimation of texture similarity is fundamental to many material recognition tasks. This study uses fine-grained human perceptual similarity ground truth to provide a comprehensive evaluation of 51 texture feature sets. We conduct two types of evaluation, and both show that these features do not estimate similarity well when compared against human agreement rates, but that performance improves when the features are combined using a Random Forest. Using a simple two-stage statistical model, we show that few of the features capture long-range aperiodic relationships. We perform two psychophysical experiments which indicate that long-range interactions do provide humans with important cues for estimating texture similarity. This motivates an extension of the study to include Convolutional Neural Networks (CNNs), as they enable arbitrary features of large spatial extent to be learnt. Our conclusion derived from the use of two pre-trained CNNs is that the large spatial extent exploited by the networks' top convolutional and first fully-connected layers, together with the use of large numbers of filters, confers a significant advantage for the estimation of perceptual texture similarity.
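The feature-combination step the study credits with improved performance can be sketched as follows: per-pair differences from several feature sets are concatenated and a Random Forest is trained against human similar/dissimilar judgments. The data below is synthetic and the feature dimensions are assumptions; only the combination pattern is the point.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative combination of texture feature sets with a Random Forest: for each
# texture pair, absolute feature differences from all descriptors are concatenated
# and the forest predicts whether humans judge the pair as similar.
rng = np.random.default_rng(0)
n_pairs, dims = 500, [32, 48, 16]                             # three hypothetical feature sets
feats = [rng.normal(size=(n_pairs, 2, d)) for d in dims]      # (pair, texture-in-pair, dim)
X = np.hstack([np.abs(f[:, 0] - f[:, 1]) for f in feats])     # combined per-pair descriptor
y = rng.integers(0, 2, n_pairs)                               # human similar / dissimilar labels

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:400], y[:400])
print("held-out accuracy:", clf.score(X[400:], y[400:]))
```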

Citations: 8
Physics-Based Generative Adversarial Models for Image Restoration and Beyond.
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2021-07-01 · Epub Date: 2021-06-08 · DOI: 10.1109/TPAMI.2020.2969348
Jinshan Pan, Jiangxin Dong, Yang Liu, Jiawei Zhang, Jimmy Ren, Jinhui Tang, Yu-Wing Tai, Ming-Hsuan Yang

We present an algorithm to directly solve numerous image restoration problems (e.g., image deblurring, image dehazing, and image deraining). These problems are ill-posed, and the common assumptions for existing methods are usually based on heuristic image priors. In this paper, we show that these problems can be solved by generative models with adversarial learning. However, a straightforward formulation based on a generic generative adversarial network (GAN) does not perform well in these tasks, and some structures of the estimated images are usually not preserved well. Motivated by an interesting observation that the estimated results should be consistent with the observed inputs under the physics models, we propose an algorithm that guides the estimation process of a specific task within the GAN framework. The proposed model is trained in an end-to-end fashion and can be applied to a variety of image restoration and low-level vision problems. Extensive experiments demonstrate that the proposed method performs favorably against state-of-the-art algorithms.
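Taking dehazing as one instance of the physics-consistency idea, a restoration estimate can be re-rendered through the standard haze formation model and compared with the observed input. The loss below is a minimal sketch under that assumption; the paper's actual constraint and how it enters the GAN objective may differ.

```python
import torch

def physics_consistency_loss(observed_hazy, estimated_clean, transmission, atmosphere):
    """Illustrative physics-consistency term for dehazing: re-render the hazy image
    from the network's clean estimate with the haze formation model
    I = J * t + A * (1 - t), and compare against the observed input."""
    rerendered = estimated_clean * transmission + atmosphere * (1.0 - transmission)
    return torch.nn.functional.l1_loss(rerendered, observed_hazy)

# Usage with toy tensors (B, 3, H, W); transmission is (B, 1, H, W) in [0, 1].
I = torch.rand(2, 3, 64, 64)                          # observed hazy input
J = torch.rand(2, 3, 64, 64, requires_grad=True)      # generator's clean estimate
t = torch.rand(2, 1, 64, 64)                          # estimated transmission map
A = torch.rand(2, 3, 1, 1)                            # estimated atmospheric light
loss = physics_consistency_loss(I, J, t, A)           # added to the adversarial objective
loss.backward()
```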

Citations: 106
Residual Dense Network for Image Restoration.
IF 23.6 · CAS Zone 1, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2021-07-01 · Epub Date: 2021-06-08 · DOI: 10.1109/TPAMI.2020.2968521
Yulun Zhang, Yapeng Tian, Yu Kong, Bineng Zhong, Yun Fu

Recently, deep convolutional neural networks (CNNs) have achieved great success for image restoration (IR) while providing hierarchical features at the same time. However, most deep CNN-based IR models do not make full use of the hierarchical features from the original low-quality images, resulting in relatively low performance. In this work, we propose a novel and efficient residual dense network (RDN) to address this problem in IR by making a better tradeoff between efficiency and effectiveness in exploiting the hierarchical features from all the convolutional layers. Specifically, we propose the residual dense block (RDB) to extract abundant local features via densely connected convolutional layers. The RDB further allows direct connections from the state of the preceding RDB to all the layers of the current RDB, leading to a contiguous memory mechanism. To adaptively learn more effective features from preceding and current local features and to stabilize the training of the wider network, we propose local feature fusion in the RDB. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. We demonstrate the effectiveness of the RDN with several representative IR applications: single image super-resolution, Gaussian image denoising, image compression artifact reduction, and image deblurring. Experiments on benchmark and real-world datasets show that our RDN achieves favorable performance against state-of-the-art methods for each IR task, both quantitatively and visually.
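The residual dense block described here, densely connected convolutions followed by local feature fusion and a local residual connection, maps closely to a short PyTorch module. The sketch below keeps that structure but uses illustrative channel counts and depth rather than the paper's settings.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Minimal residual dense block: densely connected 3x3 convolutions, a 1x1
    local-feature-fusion layer, and a local residual connection back to the block
    input. Channel counts and depth are illustrative."""
    def __init__(self, channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
        # Local feature fusion: squeeze all concatenated features back to `channels`.
        self.fusion = nn.Conv2d(channels + num_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))   # dense connections
        return x + self.fusion(torch.cat(feats, dim=1))    # local residual learning

out = ResidualDenseBlock()(torch.randn(1, 64, 32, 32))     # spatial size and channels preserved
```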

Citations: 533