
IEEE Transactions on Image Processing: Latest Publications

USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection.
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2023-04-14, DOI: 10.1109/TIP.2023.3266163
Lin Hong, Xin Wang, Gan Zhang, Ming Zhao

Underwater salient object detection (USOD) attracts increasing interest for its promising performance in various underwater visual tasks. However, USOD research is still in its early stages due to the lack of large-scale datasets in which salient objects are well defined and pixel-wise annotated. To address this issue, this paper introduces a new dataset named USOD10K. It consists of 10,255 underwater images covering 70 categories of salient objects in 12 different underwater scenes. In addition, salient object boundaries and depth maps are provided for all images. USOD10K is the first large-scale dataset in the USOD community, making a significant leap in diversity, complexity, and scalability. Second, a simple but strong baseline termed TC-USOD is designed for USOD10K. TC-USOD adopts a hybrid encoder-decoder architecture that uses transformers and convolutions as the basic computational building blocks of the encoder and decoder, respectively. Third, we provide a comprehensive summary of 35 cutting-edge SOD/USOD methods and benchmark them on the existing USOD dataset and USOD10K. The results show that TC-USOD achieves superior performance on all datasets tested. Finally, several further use cases of USOD10K are discussed, and future directions of USOD research are pointed out. This work will promote the development of USOD research and facilitate further work on underwater visual tasks and visually guided underwater robots. To pave the road in this research field, the dataset, code, and benchmark results are publicly available: https://github.com/LinHong-HIT/USOD10K.
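
The TC-USOD baseline described above pairs a transformer encoder with a convolutional decoder. Below is a minimal, illustrative PyTorch sketch of that hybrid encoder-decoder pattern for saliency prediction; the patch size, layer widths, and depth are my own assumptions and do not reproduce the TC-USOD configuration.

```python
# Minimal hybrid transformer-encoder / convolutional-decoder for saliency
# prediction, in the spirit of the TC-USOD baseline (sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridSaliencyNet(nn.Module):
    def __init__(self, in_ch=3, embed_dim=96, patch=8, depth=2, heads=4):
        super().__init__()
        # Patch embedding: a strided convolution turns the image into tokens.
        self.patch_embed = nn.Conv2d(in_ch, embed_dim, kernel_size=patch, stride=patch)
        enc_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                               dim_feedforward=embed_dim * 4,
                                               batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=depth)
        # Convolutional decoder: upsample tokens back to a full-resolution map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(embed_dim, 48, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(48, 24, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(24, 1, kernel_size=2, stride=2),
        )

    def forward(self, x):
        b, _, h, w = x.shape
        tokens = self.patch_embed(x)                  # (B, C, H/p, W/p)
        gh, gw = tokens.shape[2], tokens.shape[3]
        seq = tokens.flatten(2).transpose(1, 2)       # (B, N, C) token sequence
        seq = self.encoder(seq)                       # global reasoning
        feat = seq.transpose(1, 2).reshape(b, -1, gh, gw)
        out = self.decoder(feat)                      # coarse saliency logits
        return F.interpolate(out, size=(h, w), mode="bilinear", align_corners=False)

sal = HybridSaliencyNet()(torch.randn(1, 3, 256, 256))   # -> (1, 1, 256, 256)
```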

Citations: 0
DVMark: A Deep Multiscale Framework for Video Watermarking.
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2023-03-28, DOI: 10.1109/TIP.2023.3251737
Xiyang Luo, Yinxiao Li, Huiwen Chang, Ce Liu, Peyman Milanfar, Feng Yang

Video watermarking embeds a message into a cover video in an imperceptible manner, such that the message can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design in which the watermarks are distributed across multiple spatial-temporal scales. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.
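
To make the multiscale idea concrete, here is a hedged PyTorch sketch that distributes a bit string over two spatial-temporal scales of a cover video and adds the result as a small residual. It only illustrates the embedding concept; the branch sizes, message length, and residual scaling are assumptions, not the DVMark architecture.

```python
# Toy multiscale message embedding for video watermarking (not DVMark itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiscaleEmbedder(nn.Module):
    def __init__(self, msg_bits=96):
        super().__init__()
        # One small 3D-conv branch per scale; channel counts are arbitrary.
        self.branch_full = nn.Conv3d(3 + msg_bits, 3, kernel_size=3, padding=1)
        self.branch_half = nn.Conv3d(3 + msg_bits, 3, kernel_size=3, padding=1)

    def forward(self, video, msg):
        # video: (B, 3, T, H, W); msg: (B, msg_bits) with values in {0, 1}.
        b, _, t, h, w = video.shape
        m = msg[:, :, None, None, None].expand(b, -1, t, h, w)
        res_full = self.branch_full(torch.cat([video, m], dim=1))
        # Half-resolution branch: same message, coarser spatial-temporal grid.
        small = F.interpolate(video, scale_factor=(1, 0.5, 0.5),
                              mode="trilinear", align_corners=False)
        m_half = msg[:, :, None, None, None].expand(b, -1, t, h // 2, w // 2)
        res_half = self.branch_half(torch.cat([small, m_half], dim=1))
        res_half = F.interpolate(res_half, size=(t, h, w),
                                 mode="trilinear", align_corners=False)
        # Watermarked video = cover + small residuals from both scales.
        return video + 0.1 * (res_full + res_half)

wm = TinyMultiscaleEmbedder()(torch.randn(1, 3, 8, 64, 64),
                              torch.randint(0, 2, (1, 96)).float())
```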

Citations: 0
Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-08-16, DOI: 10.1109/TAP.2022.3218759
Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan
Existing deraining methods focus mainly on a single input image. However, with just a single input image it is extremely difficult to accurately detect and remove rain streaks and restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera, and LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available in LFIs, such as the 2D array of sub-views and the disparity map of each sub-view, for effective rain removal is still a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI. To make full use of the LFI, it adopts 4D convolutional layers to process all sub-views of the LFI simultaneously. In the pipeline, a rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multiple scales. Semi-supervised learning is introduced so that MSGP can accurately detect rain streaks by training on both virtual-world and real-world rainy LFIs at multiple scales, using computed pseudo ground truths for real-world rain streaks. We then feed all sub-views, with the predicted rain streaks subtracted, into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate depth maps, which are later converted into fog maps. Finally, all sub-views, concatenated with the corresponding rain streaks and fog maps, are fed into a rainy-LFI restoration model based on an adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations on both synthetic and real-world LFIs demonstrate the effectiveness of the proposed method.
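
A full 4D convolution over the two angular and two spatial dimensions of an LFI is memory-hungry, and a common approximation is a spatial-angular separable convolution: a 2D spatial pass with the angular dimensions folded into the batch, followed by a 2D angular pass with the spatial dimensions folded in. The PyTorch sketch below shows that generic decomposition; it is not the 4D layers used in 4D-MGP-SRRNet.

```python
# Spatial-angular separable convolution over a 2D array of light field sub-views.
import torch
import torch.nn as nn

class SpatialAngularConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.angular = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (B, C, U, V, H, W) -- U x V sub-views, each H x W pixels.
        b, c, u, v, h, w = x.shape
        # Spatial pass: treat every sub-view as an independent image.
        xs = x.permute(0, 2, 3, 1, 4, 5).reshape(b * u * v, c, h, w)
        xs = self.spatial(xs)
        c2 = xs.shape[1]
        # Angular pass: treat every pixel location as a small U x V image.
        xa = xs.reshape(b, u, v, c2, h, w).permute(0, 4, 5, 3, 1, 2)
        xa = xa.reshape(b * h * w, c2, u, v)
        xa = self.angular(xa)
        out = xa.reshape(b, h, w, c2, u, v).permute(0, 3, 4, 5, 1, 2)
        return out  # (B, C2, U, V, H, W)

feat = SpatialAngularConv(3, 16)(torch.randn(1, 3, 5, 5, 64, 64))
```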
Citations: 5
Designing an Illumination-Aware Network for Deep Image Relighting
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-07-21, DOI: 10.48550/arXiv.2207.10582
Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng
Lighting is a determining factor in photography that affects the style, expression of emotion, and even quality of images. In reality, creating or finding satisfying lighting conditions is laborious and time-consuming, so it is of great value to develop a technology that manipulates illumination in an image as post-processing. Although previous works have explored physically based techniques for relighting images, extensive supervision and prior knowledge are necessary to generate reasonable images, restricting the generalization ability of these works. In contrast, we take the viewpoint of image-to-image translation and implicitly merge ideas from the conventional physical viewpoint. In this paper, we present an Illumination-Aware Network (IAN) which follows the guidance of hierarchical sampling to progressively relight a scene from a single image with high efficiency. In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process and to extract precise descriptors of light sources for further manipulation. We also introduce a depth-guided geometry encoder for acquiring valuable geometry- and structure-related representations once depth information is available. Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods. The code and models are publicly available at https://github.com/NK-CS-ZZL/IAN.
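
As a rough illustration of how a residual block can be conditioned on a light-source descriptor, the following PyTorch sketch modulates intermediate features with a per-channel scale and shift predicted from that descriptor. The affine-modulation fusion, channel counts, and descriptor size are assumptions of this sketch, not the IARB design.

```python
# A residual block conditioned on a light-source descriptor (illustrative only).
import torch
import torch.nn as nn

class IlluminationResidualBlock(nn.Module):
    def __init__(self, channels=64, light_dim=16):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        # Map the light descriptor to a per-channel scale and shift.
        self.to_affine = nn.Linear(light_dim, 2 * channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, feat, light):
        # feat: (B, C, H, W) image features; light: (B, light_dim) target light.
        scale, shift = self.to_affine(light).chunk(2, dim=1)
        h = self.act(self.conv1(feat))
        h = h * (1 + scale[:, :, None, None]) + shift[:, :, None, None]
        h = self.conv2(h)
        return feat + h  # residual connection keeps the original content.

out = IlluminationResidualBlock()(torch.randn(2, 64, 32, 32), torch.randn(2, 16))
```
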
Citations: 6
Content-Aware Scalable Deep Compressed Sensing
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-07-19, DOI: 10.48550/arXiv.2207.09313
Bin Chen, Jian Zhang
To address image compressed sensing (CS) problems more efficiently, we present a novel content-aware scalable network dubbed CASNet which collectively achieves adaptive sampling-rate allocation, fine granular scalability and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling-rate allocation. A unified learnable generating matrix is then developed to produce a sampling matrix of any CS ratio with an ordered structure. Equipped with an optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme that prevents blocking artifacts, CASNet jointly reconstructs the image blocks sampled at various sampling rates with one single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, which are extensible without introducing extra parameters. All the CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployment. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual support among its components and strategies. Codes are available at https://github.com/Guaishou74851/CASNet.
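
The sketch below illustrates saliency-driven, block-wise sampling-rate allocation with a shared row-ordered sampling matrix, echoing the BRA idea. The local-variance saliency proxy and the fixed Gaussian matrix are stand-ins; CASNet instead uses a learned saliency detector and a learnable generating matrix.

```python
# Saliency-driven block-wise adaptive sampling for image compressed sensing.
import numpy as np

def adaptive_block_sampling(img, block=32, avg_ratio=0.1, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = img.shape
    n = block * block
    phi = rng.standard_normal((n, n)) / np.sqrt(n)    # shared, row-ordered matrix
    blocks, saliency = [], []
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block].reshape(-1)
            blocks.append(patch)
            saliency.append(patch.var())               # local variance as saliency proxy
    saliency = np.asarray(saliency)
    # Distribute the total measurement budget proportionally to saliency.
    budget = int(avg_ratio * n * len(blocks))
    m_per_block = np.maximum(1, np.round(budget * saliency / saliency.sum())).astype(int)
    m_per_block = np.minimum(m_per_block, n)
    # Each block keeps only its first m rows of the shared matrix (ordered structure).
    return [phi[:m] @ x for m, x in zip(m_per_block, blocks)]

measurements = adaptive_block_sampling(np.random.rand(128, 128))
```
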
Citations: 17
Unsupervised High-Resolution Portrait Gaze Correction and Animation
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-07-01, DOI: 10.48550/arXiv.2207.00256
Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images, which can be trained without gaze-angle and head-pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze and head-pose information. Solving this task with an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head-pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ($256 \times 256$) and high-resolution CelebHQGaze ($512 \times 512$). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye-region features and the gaze angle. As a result, we can use the learned latent space for gaze animation with semantic interpolation in this space. Moreover, to alleviate both the memory and the computational costs in the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both the gaze correction and gaze animation tasks on both low- and high-resolution face datasets in the wild and demonstrate the superiority of our method with respect to the state of the art.
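
Since the paper casts gaze correction as inpainting of the eye region, the following PyTorch sketch shows the basic mechanics: zero out a caller-supplied eye box, concatenate the mask as an extra channel, and let a small encoder-decoder fill the hole, compositing the prediction back only inside the mask. The box coordinates and the network itself are placeholders, not the GCM.

```python
# Eye-region inpainting as a stand-in for the gaze correction formulation.
import torch
import torch.nn as nn

class TinyEyeInpainter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, img, mask):
        # img: (B, 3, H, W); mask: (B, 1, H, W), 1 inside the eye region.
        holed = img * (1 - mask)                       # remove the eye region
        out = self.net(torch.cat([holed, mask], dim=1))
        # Only the masked pixels are replaced; the rest of the face is kept.
        return img * (1 - mask) + out * mask

img = torch.rand(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)
mask[..., 96:128, 64:192] = 1.0                        # hypothetical eye box
corrected = TinyEyeInpainter()(img, mask)
```
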
Citations: 2
Motion Feature Aggregation for Video-Based Person Re-Identification
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-05-27, DOI: 10.1109/TIP.2022.3175593
Xinqian Gu, Hong Chang, Bingpeng Ma, S. Shan
Most video-based person re-identification (re-id) methods focus only on appearance features and neglect motion features. In fact, motion features can help to distinguish target persons that are hard to identify from appearance features alone. However, most existing temporal information modeling methods cannot extract motion features effectively or efficiently for video-based re-id. In this paper, we propose a more efficient Motion Feature Aggregation (MFA) method to model and aggregate motion information at the feature-map level for video-based re-id. The proposed MFA consists of (i) a coarse-grained motion learning module, which extracts coarse-grained motion features based on the position changes of body parts over time, and (ii) a fine-grained motion learning module, which extracts fine-grained motion features based on the appearance changes of body parts over time. These two modules model motion information at different granularities and are complementary to each other. It is easy to combine the proposed method with existing network architectures for end-to-end training. Extensive experiments on four widely used datasets demonstrate that the motion features extracted by MFA are crucial complements to appearance features for video-based re-id, especially for scenarios with large appearance changes. Besides, the results on LS-VID, currently the largest publicly available video-based re-id dataset, surpass the state-of-the-art methods by a large margin. The code is available at: https://github.com/guxinqian/Simple-ReID.
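
A simple way to see the coarse/fine split is to derive motion cues from temporal differences of frame-level feature maps: keep full spatial resolution for fine-grained appearance change, and pool to a small part-level grid for coarse-grained change. The PyTorch sketch below does exactly that; the pooling size is my own choice and it does not reproduce the MFA modules.

```python
# Motion cues from temporal differences of per-frame feature maps.
import torch
import torch.nn.functional as F

def motion_features(feats):
    # feats: (B, T, C, H, W) appearance features for T frames of one person.
    diff = feats[:, 1:] - feats[:, :-1]                # change between frames
    # Fine-grained: keep the spatial detail of the change maps.
    fine = diff.abs().mean(dim=1)                      # (B, C, H, W)
    # Coarse-grained: pool to a small grid so only part-level changes survive.
    b, t1, c, h, w = diff.shape
    coarse = F.adaptive_avg_pool2d(diff.reshape(b * t1, c, h, w), (4, 2))
    coarse = coarse.reshape(b, t1, c, 4, 2).abs().mean(dim=1)   # (B, C, 4, 2)
    return fine, coarse

fine, coarse = motion_features(torch.randn(2, 8, 64, 16, 8))
```
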
Citations: 5
Data Augmentation Using Bitplane Information Recombination Model
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2022-05-20, DOI: 10.1109/TIP.2022.3175429
Huan Zhang, Zhiyi Xu, Xiaolin Han, Weidong Sun
The performance of deep learning heavily depends on the quantity and quality of training data. But in many fields, well-annotated data are difficult to collect, which makes the data scale hard to meet the needs of network training. To deal with this issue, a novel data augmentation method using the bitplane information recombination model (termed BIRD) is proposed in this paper. Considering that each bitplane can provide different structural information at different levels of detail, this method divides the internal hierarchical structure of a given image into different bitplanes, and reorganizes them by bitplane extraction, bitplane selection and bitplane recombination, to form augmented data with different image details. This method can generate up to 62 times the training data for a given 8-bit image. In addition, this generalized method is model-free, parameter-free and easy to combine with various neural networks, without changing the original annotated data. Taking target detection for remotely sensed images and classification for natural images as examples, experimental results on the DOTA and CIFAR-100 datasets demonstrate that our proposed method is not only effective for data augmentation, but also helpful for improving the accuracy of target detection and image classification.
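
The bitplane pipeline described above is easy to illustrate directly: extract the eight bitplanes of an 8-bit image, select a subset, and recombine them into a new training sample. The NumPy sketch below shows those three steps; which planes to keep in practice is the method's selection step, and the subset used here is only an example.

```python
# Bitplane extraction, selection and recombination for an 8-bit image.
import numpy as np

def extract_bitplanes(img_u8):
    # img_u8: (H, W) uint8 image -> (8, H, W) array of 0/1 planes, LSB first.
    return np.stack([(img_u8 >> k) & 1 for k in range(8)], axis=0)

def recombine(planes, keep):
    # Rebuild an image from a chosen subset of bitplanes (others set to zero).
    out = np.zeros(planes.shape[1:], dtype=np.uint16)
    for k in keep:
        out += planes[k].astype(np.uint16) << k
    return out.astype(np.uint8)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
planes = extract_bitplanes(img)
# Example recombination: keep the five most significant planes, drop the rest.
augmented = recombine(planes, keep=[3, 4, 5, 6, 7])
```
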
Citations: 3
Real Image Denoising With a Locally-Adaptive Bitonic Filter
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2021-09-08, DOI: 10.17863/CAM.75234
Graham M. Treece
Image noise removal is a common problem with many proposed solutions. The current standard is set by learning-based approaches; however, these are not appropriate in all scenarios, perhaps due to lack of training data or the need for predictability in novel circumstances. The bitonic filter is a non-learning-based filter for removing noise from signals, with a mathematical morphology (ranking) framework in which the signal is postulated to be locally bitonic (having only one minimum or maximum) over some domain of finite extent. A novel version of this filter is developed in this paper, with a domain that is locally adaptive to the signal, and other adjustments that allow application to real image sensor noise. These lead to significant improvements in noise-reduction performance at no cost to processing times. The new bitonic filter performs better than the block-matching 3D filter for high levels of additive white Gaussian noise. It also surpasses this and other more recent non-learning-based filters on two public data sets containing real image noise at various levels, despite an additional adjustment to the block-matching filter which gives it significantly better performance than previously cited on these data sets. The new bitonic filter has a signal-to-noise ratio 2.4 dB lower than the best learning-based techniques when they are optimally trained. However, the performance gap is closed completely when these techniques are trained on data sets not directly related to the benchmark data. This demonstrates what can be achieved with a predictable, explainable, entirely local technique, which makes no assumptions of repeating patterns either within an image or across images, and hence creates residual images which are well behaved even in very high noise. Since the filter does not require training, it can still be used in situations where training is either difficult or inappropriate.
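
As a crude illustration of the morphological (ranking) idea behind a bitonic filter, the SciPy sketch below combines a grey-scale opening and closing so that isolated bright and dark impulses are suppressed while locally bitonic structure survives. This is a toy stand-in, not the locally-adaptive, rank-based filter developed in the paper.

```python
# Toy morphological smoother in the spirit of the bitonic filter.
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def toy_bitonic(img, size=5):
    opened = grey_opening(img, size=(size, size))   # suppresses bright impulses
    closed = grey_closing(img, size=(size, size))   # suppresses dark impulses
    # Average the two so neither bright nor dark detail is favoured.
    return 0.5 * (opened + closed)

noisy = np.random.rand(128, 128)
smoothed = toy_bitonic(noisy)
```
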
Citations: 3
Fractional Super-Resolution of Voxelized Point Clouds
IF 10.6, CAS Region 1, Q1 Computer Science, Pub Date: 2021-01-15, DOI: 10.36227/techrxiv.15032052.v1
Tomás M. Borges, Diogo C. Garcia, R. Queiroz
We present a method to super-resolve voxelized point clouds downsampled by a fractional factor, using lookup tables (LUTs) constructed from self-similarities within their own downsampled neighborhoods. The proposed method was developed to densify and increase the precision of voxelized point clouds, and can be used, for example, to improve compression and rendering. We super-resolve the geometry, but for completeness we also interpolate texture by averaging colors from adjacent neighbors. To the best of our understanding, our technique is the first developed specifically for intra-frame super-resolution of voxelized point clouds with arbitrary resampling scale factors. We present extensive test results over different point clouds, showing the effectiveness of the proposed approach against baseline methods.
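
As a point of reference for the geometry-then-colour pipeline, the NumPy sketch below performs the naive alternative to the paper's approach: integer-factor voxel splitting with colours copied from the parent voxel. Fractional factors and the self-similarity lookup tables are deliberately not reproduced here.

```python
# Naive voxel densification: split each occupied voxel into s x s x s children.
import numpy as np

def upsample_voxels(coords, colors, s=2):
    # coords: (N, 3) integer voxel coordinates; colors: (N, 3) per-voxel RGB.
    offsets = np.stack(np.meshgrid(*(range(s),) * 3, indexing="ij"), -1).reshape(-1, 3)
    up_coords = (coords[:, None, :] * s + offsets[None, :, :]).reshape(-1, 3)
    up_colors = np.repeat(colors, s ** 3, axis=0)      # copy parent colour to children
    return up_coords, up_colors

coords = np.array([[0, 0, 0], [1, 2, 3]])
colors = np.array([[255, 0, 0], [0, 255, 0]])
dense_coords, dense_colors = upsample_voxels(coords, colors, s=2)   # 16 voxels
```
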
Citations: 13