USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection
Pub Date: 2023-04-14 | DOI: 10.1109/TIP.2023.3266163 | IEEE Transactions on Image Processing
Lin Hong, Xin Wang, Gan Zhang, Ming Zhao
Underwater salient object detection (USOD) attracts increasing interest for its promising performance in various underwater visual tasks. However, USOD research is still in its early stages due to the lack of large-scale datasets in which salient objects are well defined and pixel-wise annotated. To address this issue, this paper introduces a new dataset named USOD10K. It consists of 10,255 underwater images covering 70 categories of salient objects in 12 different underwater scenes, and it provides salient object boundaries and depth maps for all images. USOD10K is the first large-scale dataset in the USOD community, marking a significant leap in diversity, complexity, and scalability. Second, a simple but strong baseline termed TC-USOD is designed for USOD10K. TC-USOD adopts a hybrid encoder-decoder architecture that uses transformers and convolutions as the basic computational building blocks of the encoder and decoder, respectively. Third, we comprehensively summarize 35 cutting-edge SOD/USOD methods and benchmark them on the existing USOD dataset and on USOD10K. The results show that our TC-USOD achieves superior performance on all datasets tested. Finally, several further use cases of USOD10K are discussed and future directions of USOD research are pointed out. This work will promote the development of USOD research and facilitate further work on underwater visual tasks and visually guided underwater robots. To pave the way for research in this field, the dataset, code, and benchmark results are publicly available at https://github.com/LinHong-HIT/USOD10K.
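To make the hybrid design concrete, the following minimal PyTorch sketch pairs a transformer encoder with a small convolutional decoder that predicts a per-pixel saliency map. It is an illustrative simplification only: the module layout, channel widths, and patch size are assumptions and do not reproduce the authors' TC-USOD implementation.

```python
# Minimal sketch of a transformer-encoder / convolutional-decoder saliency model.
# Hypothetical simplification of the TC-USOD idea, not the authors' code.
import torch
import torch.nn as nn

class TinyTCUSOD(nn.Module):
    def __init__(self, in_ch=3, dim=96, depth=4, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)  # patchify
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)        # transformer encoder
        self.decoder = nn.Sequential(                                        # convolutional decoder
            nn.Conv2d(dim, dim // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False),
            nn.Conv2d(dim // 2, 1, 3, padding=1),
        )

    def forward(self, x):
        f = self.embed(x)                          # (B, dim, H/p, W/p)
        b, c, h, w = f.shape
        tokens = self.encoder(f.flatten(2).transpose(1, 2))   # (B, N, dim)
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.decoder(f))      # (B, 1, H, W) saliency map

sal = TinyTCUSOD()(torch.randn(1, 3, 256, 256))
print(sal.shape)  # torch.Size([1, 1, 256, 256])
```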
{"title":"USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection.","authors":"Lin Hong, Xin Wang, Gan Zhang, Ming Zhao","doi":"10.1109/TIP.2023.3266163","DOIUrl":"10.1109/TIP.2023.3266163","url":null,"abstract":"<p><p>Underwater salient object detection (USOD) attracts increasing interest for its promising performance in various underwater visual tasks. However, USOD research is still in its early stages due to the lack of large-scale datasets within which salient objects are well-defined and pixel-wise annotated. To address this issue, this paper introduces a new dataset named USOD10K. It consists of 10,255 underwater images, covering 70 categories of salient objects in 12 different underwater scenes. In addition, salient object boundaries and depth maps of all images are provided in this dataset. The USOD10K is the first large-scale dataset in the USOD community, making a significant leap in diversity, complexity, and scalability. Secondly, a simple but strong baseline termed TC-USOD is designed for the USOD10K. The TC-USOD adopts a hybrid architecture based on an encoder-decoder design that leverages transformer and convolution as the basic computational building block of the encoder and decoder, respectively. Thirdly, we make a comprehensive summarization of 35 cutting-edge SOD/USOD methods and benchmark them over the existing USOD dataset and the USOD10K. The results show that our TC-USOD obtained superior performance on all datasets tested. Finally, several other use cases of the USOD10K are discussed, and future directions of USOD research are pointed out. This work will promote the development of the USOD research and facilitate further research on underwater visual tasks and visually-guided underwater robots. To pave the road in this research field, all the dataset, code, and benchmark results are publicly available: https://github.com/LinHong-HIT/USOD10K.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9781338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DVMark: A Deep Multiscale Framework for Video Watermarking
Pub Date: 2023-03-28 | DOI: 10.1109/TIP.2023.3251737 | IEEE Transactions on Image Processing
Xiyang Luo, Yinxiao Li, Huiwen Chang, Ce Liu, Peyman Milanfar, Feng Yang
Video watermarking embeds a message into a cover video in an imperceptible manner, which can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design where the watermarks are distributed across multiple spatial-temporal scales. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.
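The core multiscale idea can be sketched in a few lines: the same message is injected into the video at several spatio-temporal resolutions and the combined residual is added back with a small amplitude. The snippet below is a hypothetical illustration (message length, number of scales, and the residual-injection scheme are assumptions), not the DVMark encoder, and it omits the decoder and distortion layers entirely.

```python
# Hypothetical sketch of multiscale watermark embedding into a video tensor
# of shape (B, C, T, H, W); for illustration of the idea only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiscaleEmbedder(nn.Module):
    def __init__(self, msg_bits=96, ch=3):
        super().__init__()
        # one small 3D-conv branch per scale; the message is broadcast spatially
        self.branches = nn.ModuleList(
            nn.Conv3d(ch + msg_bits, ch, kernel_size=3, padding=1) for _ in range(3)
        )

    def forward(self, video, message):
        # message: (B, msg_bits) with values in {0, 1}
        residual = torch.zeros_like(video)
        for s, branch in enumerate(self.branches):
            scale = 2 ** s
            v = F.avg_pool3d(video, kernel_size=(1, scale, scale)) if scale > 1 else video
            m = message[:, :, None, None, None].expand(-1, -1, *v.shape[2:])
            r = branch(torch.cat([v, m], dim=1))              # per-scale residual
            r = F.interpolate(r, size=video.shape[2:], mode="trilinear", align_corners=False)
            residual = residual + r
        return video + 0.01 * residual                        # small, imperceptible perturbation

video = torch.rand(1, 3, 8, 64, 64)
msg = torch.randint(0, 2, (1, 96)).float()
watermarked = MultiscaleEmbedder()(video, msg)
print((watermarked - video).abs().mean())
```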
{"title":"DVMark: A Deep Multiscale Framework for Video Watermarking.","authors":"Xiyang Luo, Yinxiao Li, Huiwen Chang, Ce Liu, Peyman Milanfar, Feng Yang","doi":"10.1109/TIP.2023.3251737","DOIUrl":"10.1109/TIP.2023.3251737","url":null,"abstract":"<p><p>Video watermarking embeds a message into a cover video in an imperceptible manner, which can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design where the watermarks are distributed across multiple spatial-temporal scales. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9266354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process
Pub Date: 2022-08-16 | DOI: 10.1109/TAP.2022.3218759 | IEEE Transactions on Image Processing, vol. 32, pp. 921-936
Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan
Existing deraining methods focus mainly on a single input image. However, with just a single input image it is extremely difficult to accurately detect and remove rain streaks in order to restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera, and LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available in LFIs, such as the 2D array of sub-views and the disparity map of each sub-view, for effective rain removal remains a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI and adopts 4D convolutional layers to process them simultaneously, making full use of the LFI. In the pipeline, a rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multiple scales. Semi-supervised learning is introduced so that MSGP can accurately detect rain streaks by training on both virtual-world and real-world rainy LFIs at multiple scales, computing pseudo ground truths for real-world rain streaks. We then subtract the predicted rain streaks from all sub-views and feed the results into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate depth maps, which are later converted into fog maps. Finally, all sub-views, concatenated with the corresponding rain streaks and fog maps, are fed into a powerful rainy-LFI restoration model based on an adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations on both synthetic and real-world LFIs demonstrate the effectiveness of the proposed method.
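PyTorch offers no native nn.Conv4d, so a common workaround, assumed here purely for illustration, is to assemble a 4D convolution from a stack of Conv3d taps applied along one angular dimension of the light field. The sketch below is not the 4D-MGP-SRRNet code; it only shows how all sub-views of an LFI laid out as (B, C, U, V, H, W) can be processed jointly.

```python
# 4D convolution over (U, V, H, W), assembled from k Conv3d taps along the U axis.
import torch
import torch.nn as nn

class Conv4d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.pad = k // 2
        # one Conv3d (over V, H, W) per tap of the kernel along U; bias omitted so
        # that summing the taps matches a single zero-padded 4D convolution
        self.taps = nn.ModuleList(
            nn.Conv3d(in_ch, out_ch, kernel_size=k, padding=k // 2, bias=False)
            for _ in range(k)
        )

    def forward(self, x):                          # x: (B, C, U, V, H, W)
        b, c, u, v, h, w = x.shape
        out = 0
        for i, conv in enumerate(self.taps):
            shift = i - self.pad                   # which U-slice this tap reads
            slices = []
            for uu in range(u):
                src = uu + shift
                if 0 <= src < u:
                    slices.append(conv(x[:, :, src]))                          # (B, C', V, H, W)
                else:
                    slices.append(x.new_zeros(b, conv.out_channels, v, h, w))  # zero-pad in U
            out = out + torch.stack(slices, dim=2)  # (B, C', U, V, H, W)
        return out

lf = torch.randn(1, 3, 5, 5, 32, 32)               # 5x5 angular sub-views, 32x32 pixels each
print(Conv4d(3, 8)(lf).shape)                      # torch.Size([1, 8, 5, 5, 32, 32])
```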
{"title":"Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process","authors":"Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan","doi":"10.1109/TAP.2022.3218759","DOIUrl":"https://doi.org/10.1109/TAP.2022.3218759","url":null,"abstract":"Existing deraining methods focus mainly on a single input image. However, with just a single input image, it is extremely difficult to accurately detect and remove rain streaks, in order to restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera. LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available from LFIs, such as 2D array of sub-views and the disparity map of each sub-view, for effective rain removal is still a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI. To make full use of the LFI, it adopts 4D convolutional layers to simultaneously process all sub-views of the LFI. In the pipeline, the rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multi-scales. Semi-supervised learning is introduced for MSGP to accurately detect rain streaks by training on both virtual-world rainy LFIs and real-world rainy LFIs at multi-scales via computing pseudo ground truths for real-world rain streaks. We then feed all sub-views subtracting the predicted rain streaks into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate the depth maps, which are later converted into fog maps. Finally, all sub-views concatenated with the corresponding rain streaks and fog maps are fed into a powerful rainy LFI restoring model based on the adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic LFIs and real-world LFIs demonstrate the effectiveness of our proposed method.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"32 1","pages":"921-936"},"PeriodicalIF":10.6,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48830864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing an Illumination-Aware Network for Deep Image Relighting
Pub Date: 2022-07-21 | DOI: 10.48550/arXiv.2207.10582 | IEEE Transactions on Image Processing, vol. 31, pp. 5396-5411
Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng
Lighting is a determining factor in photography that affects the style, the expression of emotion, and even the quality of images. Creating or finding satisfying lighting conditions in reality is laborious and time-consuming, so it is of great value to develop a technology that manipulates illumination in an image as a post-processing step. Although previous works have explored physically based techniques for relighting images, extensive supervision and prior knowledge are necessary to generate reasonable results, which restricts the generalization ability of these methods. In contrast, we take the viewpoint of image-to-image translation and implicitly merge ideas from the conventional physical viewpoint. In this paper, we present an Illumination-Aware Network (IAN) that follows guidance from hierarchical sampling to progressively relight a scene from a single image with high efficiency. In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process and to extract precise descriptors of light sources for further manipulation. We also introduce a depth-guided geometry encoder that acquires valuable geometry- and structure-related representations when depth information is available. Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods. The code and models are publicly available at https://github.com/NK-CS-ZZL/IAN.
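One way to picture an illumination-aware residual block is a residual unit whose features are modulated by a light-source descriptor. The sketch below uses a FiLM-style per-channel scale and shift as a stand-in for that modulation; the layer sizes, the descriptor dimensionality, and the modulation scheme are assumptions for illustration and not the IARB as published.

```python
# Hypothetical illumination-conditioned residual block (FiLM-style modulation).
import torch
import torch.nn as nn

class LightConditionedResBlock(nn.Module):
    def __init__(self, ch=64, light_dim=16):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.act = nn.ReLU(inplace=True)
        self.to_scale = nn.Linear(light_dim, ch)   # per-channel gain from the light code
        self.to_shift = nn.Linear(light_dim, ch)   # per-channel bias from the light code

    def forward(self, feat, light_code):
        # feat: (B, ch, H, W), light_code: (B, light_dim)
        h = self.act(self.conv1(feat))
        gamma = self.to_scale(light_code)[:, :, None, None]
        beta = self.to_shift(light_code)[:, :, None, None]
        h = h * (1 + gamma) + beta                 # illumination-aware modulation
        return feat + self.conv2(h)                # residual connection

out = LightConditionedResBlock()(torch.randn(2, 64, 32, 32), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```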
{"title":"Designing an Illumination-Aware Network for Deep Image Relighting","authors":"Zuo-Liang Zhu, Z. Li, Ruimao Zhang, Chunle Guo, Ming-Ming Cheng","doi":"10.48550/arXiv.2207.10582","DOIUrl":"https://doi.org/10.48550/arXiv.2207.10582","url":null,"abstract":"Lighting is a determining factor in photography that affects the style, expression of emotion, and even quality of images. Creating or finding satisfying lighting conditions, in reality, is laborious and time-consuming, so it is of great value to develop a technology to manipulate illumination in an image as post-processing. Although previous works have explored techniques based on the physical viewpoint for relighting images, extensive supervisions and prior knowledge are necessary to generate reasonable images, restricting the generalization ability of these works. In contrast, we take the viewpoint of image-to-image translation and implicitly merge ideas of the conventional physical viewpoint. In this paper, we present an Illumination-Aware Network (IAN) which follows the guidance from hierarchical sampling to progressively relight a scene from a single image with high efficiency. In addition, an Illumination-Aware Residual Block (IARB) is designed to approximate the physical rendering process and to extract precise descriptors of light sources for further manipulations. We also introduce a depth-guided geometry encoder for acquiring valuable geometry- and structure-related representations once the depth information is available. Experimental results show that our proposed method produces better quantitative and qualitative relighting results than previous state-of-the-art methods. The code and models are publicly available on https://github.com/NK-CS-ZZL/IAN.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5396-5411"},"PeriodicalIF":10.6,"publicationDate":"2022-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49347222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Content-Aware Scalable Deep Compressed Sensing
Pub Date: 2022-07-19 | DOI: 10.48550/arXiv.2207.09313 | IEEE Transactions on Image Processing, vol. 31, pp. 5412-5426
Bin Chen, Jian Zhang
To address image compressed sensing (CS) problems more efficiently, we present a novel content-aware scalable network dubbed CASNet, which collectively achieves adaptive sampling-rate allocation, fine granular scalability, and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling-rate allocation. A unified learnable generating matrix is then developed to produce a sampling matrix of any CS ratio with an ordered structure. Equipped with an optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme that prevents blocking artifacts, CASNet jointly reconstructs image blocks sampled at various rates with a single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, both of which are extensible without introducing extra parameters. All CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployment. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual support among its components and strategies. Code is available at https://github.com/Guaishou74851/CASNet.
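The saliency-driven rate allocation can be illustrated with a toy block-wise sampler: a per-block "saliency" score sets how many rows of an ordered measurement matrix each block receives. The NumPy sketch below is an assumed simplification (a random matrix in place of CASNet's learned generating matrix, block variance in place of the learned saliency detector), intended only to show how a single matrix supports arbitrary per-block CS ratios.

```python
# Toy block-wise compressed sensing with saliency-driven rate allocation.
import numpy as np

def allocate_rates(saliency_blocks, target_ratio):
    # distribute the measurement budget across blocks proportionally to saliency
    w = saliency_blocks / saliency_blocks.sum()
    return np.clip(w * target_ratio * len(saliency_blocks), 0.01, 1.0)

def sample_blocks(image, block=32, target_ratio=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n = block * block
    phi_full = rng.standard_normal((n, n)) / np.sqrt(n)   # stand-in for an ordered sampling matrix
    h, w = image.shape
    blocks = image.reshape(h // block, block, w // block, block).transpose(0, 2, 1, 3)
    blocks = blocks.reshape(-1, n)                        # one row vector per block
    saliency = blocks.var(axis=1) + 1e-8                  # toy saliency: block variance
    ratios = allocate_rates(saliency, target_ratio)
    measurements = []
    for vec, r in zip(blocks, ratios):
        m = max(1, int(round(r * n)))                     # per-block measurement count
        measurements.append(phi_full[:m] @ vec)           # take the first m "ordered" rows
    return measurements, ratios

img = np.random.rand(128, 128)
meas, ratios = sample_blocks(img)
print(len(meas), ratios.min(), ratios.max())
```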
{"title":"Content-Aware Scalable Deep Compressed Sensing","authors":"Bin Chen, Jian Zhang","doi":"10.48550/arXiv.2207.09313","DOIUrl":"https://doi.org/10.48550/arXiv.2207.09313","url":null,"abstract":"To more efficiently address image compressed sensing (CS) problems, we present a novel content-aware scalable network dubbed CASNet which collectively achieves adaptive sampling rate allocation, fine granular scalability and high-quality reconstruction. We first adopt a data-driven saliency detector to evaluate the importance of different image regions and propose a saliency-based block ratio aggregation (BRA) strategy for sampling rate allocation. A unified learnable generating matrix is then developed to produce sampling matrix of any CS ratio with an ordered structure. Being equipped with the optimization-inspired recovery subnet guided by saliency information and a multi-block training scheme preventing blocking artifacts, CASNet jointly reconstructs the image blocks sampled at various sampling rates with one single model. To accelerate training convergence and improve network robustness, we propose an SVD-based initialization scheme and a random transformation enhancement (RTE) strategy, which are extensible without introducing extra parameters. All the CASNet components can be combined and learned end-to-end. We further provide a four-stage implementation for evaluation and practical deployments. Experiments demonstrate that CASNet outperforms other CS networks by a large margin, validating the collaboration and mutual supports among its components and strategies. Codes are available at https://github.com/Guaishou74851/CASNet.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5412-5426"},"PeriodicalIF":10.6,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45018690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unsupervised High-Resolution Portrait Gaze Correction and Animation
Pub Date: 2022-07-01 | DOI: 10.48550/arXiv.2207.00256 | IEEE Transactions on Image Processing, vol. 31, pp. 5272-5286
Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images that can be trained without gaze angle and head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze and head pose information. Solving this task with an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ($256 \times 256$) and the high-resolution CelebHQGaze ($512 \times 512$). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, Synthesis-As-Training, to learn the correlation between eye-region features and the gaze angle. As a result, we can use the learned latent space for gaze animation via semantic interpolation in this space. Furthermore, to alleviate both the memory and the computational costs in the training and inference stages, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both gaze correction and gaze animation on low- and high-resolution face datasets in the wild, and demonstrate its superiority with respect to the state of the art.
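The inpainting formulation itself is compact: mask out the eye region, let a generator fill it in, and composite the generated and original pixels. The snippet below illustrates only that composition with a placeholder generator; the mask location, the network layers, and everything about GCM, GAM, and Synthesis-As-Training are assumptions not taken from the paper.

```python
# Gaze correction viewed as inpainting: masked input -> generator -> composite.
import torch
import torch.nn as nn

generator = nn.Sequential(          # placeholder stand-in for a gaze correction module
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
    nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
)

def correct_gaze(face, eye_mask):
    # face: (B, 3, H, W) in [0, 1]; eye_mask: (B, 1, H, W), 1 inside the eye region
    masked = face * (1 - eye_mask)                       # remove the eye region
    filled = generator(torch.cat([masked, eye_mask], 1)) # inpaint it
    return eye_mask * filled + (1 - eye_mask) * face     # keep original pixels elsewhere

face = torch.rand(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)
mask[:, :, 100:130, 60:200] = 1.0                        # hypothetical eye-strip location
print(correct_gaze(face, mask).shape)                    # torch.Size([1, 3, 256, 256])
```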
{"title":"Unsupervised High-Resolution Portrait Gaze Correction and Animation","authors":"Jichao Zhang, Jingjing Chen, Hao Tang, E. Sangineto, Peng Wu, Yan Yan, N. Sebe, Wei Wang","doi":"10.48550/arXiv.2207.00256","DOIUrl":"https://doi.org/10.48550/arXiv.2207.00256","url":null,"abstract":"This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images, which can be trained without the gaze angle and the head pose annotations. Common gaze-correction methods usually require annotating training data with precise gaze, and head pose information. Solving this problem using an unsupervised method remains an open problem, especially for high-resolution face images in the wild, which are not easy to annotate with gaze and head pose labels. To address this issue, we first create two new portrait datasets: CelebGaze ( $256 times 256$ ) and high-resolution CelebHQGaze ( $512 times 512$ ). Second, we formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module (GCM) and a Gaze Animation Module (GAM). Moreover, we propose an unsupervised training strategy, i.e., Synthesis-As-Training, to learn the correlation between the eye region features and the gaze angle. As a result, we can use the learned latent space for gaze animation with semantic interpolation in this space. Moreover, to alleviate both the memory and the computational costs in the training and the inference stage, we propose a Coarse-to-Fine Module (CFM) integrated with GCM and GAM. Extensive experiments validate the effectiveness of our method for both the gaze correction and the gaze animation tasks in both low and high-resolution face datasets in the wild and demonstrate the superiority of our method with respect to the state of the art.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"5272-5286"},"PeriodicalIF":10.6,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47912929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Feature Aggregation for Video-Based Person Re-Identification
Pub Date: 2022-05-27 | DOI: 10.1109/TIP.2022.3175593 | IEEE Transactions on Image Processing, vol. 31, pp. 3908-3919
Xinqian Gu, Hong Chang, Bingpeng Ma, S. Shan
Most video-based person re-identification (re-id) methods focus only on appearance features and neglect motion features. In fact, motion features can help distinguish target persons who are hard to identify from appearance features alone. However, most existing temporal-information modeling methods cannot extract motion features effectively or efficiently for video-based re-id. In this paper, we propose a more efficient Motion Feature Aggregation (MFA) method to model and aggregate motion information at the feature-map level for video-based re-id. The proposed MFA consists of (i) a coarse-grained motion learning module, which extracts coarse-grained motion features based on the position changes of body parts over time, and (ii) a fine-grained motion learning module, which extracts fine-grained motion features based on the appearance changes of body parts over time. These two modules model motion information at different granularities and are complementary to each other. The proposed method is easy to combine with existing network architectures for end-to-end training. Extensive experiments on four widely used datasets demonstrate that the motion features extracted by MFA are crucial complements to appearance features for video-based re-id, especially in scenarios with large appearance changes. Moreover, the results on LS-VID, currently the largest publicly available video-based re-id dataset, surpass the state-of-the-art methods by a large margin. The code is available at https://github.com/guxinqian/Simple-ReID.
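A rough intuition for the two granularities is temporal differencing at the feature-map level: appearance change of pooled body-part features on the one hand, and change of a positional statistic of the activations on the other. The toy function below is an assumed illustration of that intuition, not the MFA modules themselves.

```python
# Toy motion cues from a sequence of frame-level feature maps via temporal differencing.
import torch

def part_motion_features(feats, num_parts=4):
    # feats: (B, T, C, H, W) frame-level feature maps
    b, t, c, h, w = feats.shape
    # split each frame vertically into body parts and average-pool each part
    parts = feats.reshape(b, t, c, num_parts, h // num_parts, w).mean(dim=(4, 5))  # (B, T, C, P)
    appearance_change = parts[:, 1:] - parts[:, :-1]      # fine-grained: appearance over time
    # coarse-grained proxy: vertical centre of mass of the activations per channel
    rows = torch.arange(h, dtype=feats.dtype).view(1, 1, 1, h, 1)
    mass = feats.sum(dim=(3, 4)) + 1e-6                   # (B, T, C)
    centre = (feats * rows).sum(dim=(3, 4)) / mass        # (B, T, C)
    position_change = centre[:, 1:] - centre[:, :-1]      # coarse-grained: position over time
    return appearance_change, position_change

app, pos = part_motion_features(torch.rand(2, 8, 16, 16, 8))
print(app.shape, pos.shape)  # torch.Size([2, 7, 16, 4]) torch.Size([2, 7, 16])
```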
{"title":"Motion Feature Aggregation for Video-Based Person Re-Identification","authors":"Xinqian Gu, Hong Chang, Bingpeng Ma, S. Shan","doi":"10.1109/TIP.2022.3175593","DOIUrl":"https://doi.org/10.1109/TIP.2022.3175593","url":null,"abstract":"Most video-based person re-identification (re-id) methods only focus on appearance features but neglect motion features. In fact, motion features can help to distinguish the target persons that are hard to be identified only by appearance features. However, most existing temporal information modeling methods cannot extract motion features effectively or efficiently for v ideo-based re-id. In this paper, we propose a more efficient Motion Feature Aggregation (MFA) method to model and aggregate motion information in the feature map level for video-based re-id. The proposed MFA consists of (i) a coarse-grained motion learning module, which extracts coarse-grained motion features based on the position changes of body parts over time, and (ii) a fine-grained motion learning module, which extracts fine-grained motion features based on the appearance changes of body parts over time. These two modules can model motion information from different granularities and are complementary to each other. It is easy to combine the proposed method with existing network architectures for end-to-end training. Extensive experiments on four widely used datasets demonstrate that the motion features extracted by MFA are crucial complements to appearance features for video-based re-id, especially for the scenario with large appearance changes. Besides, the results on LS-VID, the current largest publicly available video-based re-id dataset, surpass the state-of-the-art methods by a large margin. The code is available at: https://github.com/guxinqian/Simple-ReID.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"3908-3919"},"PeriodicalIF":10.6,"publicationDate":"2022-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62591748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data Augmentation Using Bitplane Information Recombination Model
Pub Date: 2022-05-20 | DOI: 10.1109/TIP.2022.3175429 | IEEE Transactions on Image Processing, pp. 3713-3725
Huan Zhang, Zhiyi Xu, Xiaolin Han, Weidong Sun
The performance of deep learning heavily depends on the quantity and quality of training data. However, in many fields well-annotated data are difficult to collect, so the available data can hardly meet the needs of network training. To deal with this issue, a novel data augmentation method using a bitplane information recombination model (termed BIRD) is proposed in this paper. Considering that each bitplane provides different structural information at a different level of detail, this method divides the internal hierarchical structure of a given image into bitplanes and reorganizes them by bitplane extraction, bitplane selection, and bitplane recombination to form augmented data with different image details. For a given 8-bit image, this method can generate up to 62 times the original training data. In addition, this generalized method is model-free and parameter-free, and it is easy to combine with various neural networks without changing the original annotated data. Taking target detection in remotely sensed images and classification of natural images as examples, experimental results on the DOTA and CIFAR-100 datasets demonstrate that the proposed method is not only effective for data augmentation but also helpful for improving the accuracy of target detection and image classification.
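The core operation, splitting an 8-bit image into bitplanes and recombining subsets of them, is easy to show directly. In the NumPy sketch below the subset-selection policy (all contiguous bit ranges except the full image) is an assumption for illustration and is not necessarily the selection strategy used by BIRD.

```python
# Bitplane extraction and recombination for an 8-bit image.
import numpy as np

def bitplanes(img):
    # img: uint8 array -> list of 8 binary planes (plane k holds bit k)
    return [((img >> k) & 1).astype(int) for k in range(8)]

def recombine(planes, keep):
    # rebuild an image from a subset of bitplanes, zeroing the others
    out = np.zeros(planes[0].shape, dtype=int)
    for k in keep:
        out += planes[k] * (1 << k)
    return out.astype(np.uint8)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
planes = bitplanes(img)
variants = [recombine(planes, range(lo, hi + 1))
            for lo in range(8) for hi in range(lo, 8) if (lo, hi) != (0, 7)]
print(len(variants))  # 35 contiguous-range variants under this illustrative policy
```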
{"title":"Data Augmentation Using Bitplane Information Recombination Model","authors":"Huan Zhang, Zhiyi Xu, Xiaolin Han, Weidong Sun","doi":"10.1109/TIP.2022.3175429","DOIUrl":"https://doi.org/10.1109/TIP.2022.3175429","url":null,"abstract":"The performance of deep learning heavily depend on the quantity and quality of training data. But in many fields, well-annotated data are so difficult to collect, which makes the data scale hard to meet the needs of network training. To deal with this issue, a novel data augmentation method using the bitplane information recombination model (termed as BIRD) is proposed in this paper. Considering each bitplane can provide different structural information at different levels of detail, this method divides the internal hierarchical structure of a given image into different bitplanes, and reorganizes them by bitplane extraction, bitplane selection and bitplane recombination, to form an augmented data with different image details. This method can generate up to 62 times of the training data, for a given 8-bits image. In addition, this generalized method is model free, parameter free and easy to combine with various neural networks, without changing the original annotated data. Taking the task of target detection for remotely sensed images and classification for natural images as an example, experimental results on DOTA dataset and CIFAR-100 dataset demonstrated that, our proposed method is not only effective for data augmentation, but also helpful to improve the accuracy of target detection and image classification.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"12 1","pages":"3713-3725"},"PeriodicalIF":10.6,"publicationDate":"2022-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62591682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real Image Denoising With a Locally-Adaptive Bitonic Filter
Pub Date: 2021-09-08 | DOI: 10.17863/CAM.75234 | IEEE Transactions on Image Processing, vol. 31, pp. 3151-3165
Graham M. Treece
Image noise removal is a common problem with many proposed solutions. The current standard is set by learning-based approaches; however, these are not appropriate in all scenarios, for example due to a lack of training data or the need for predictability in novel circumstances. The bitonic filter is a non-learning-based filter for removing noise from signals, built on a mathematical morphology (ranking) framework in which the signal is postulated to be locally bitonic (having only one minimum or maximum) over some domain of finite extent. A novel version of this filter is developed in this paper, with a domain that is locally adaptive to the signal, together with other adjustments that allow application to real image sensor noise. These lead to significant improvements in noise-reduction performance at no cost to processing times. The new bitonic filter performs better than the block-matching 3D filter for high levels of additive white Gaussian noise. It also surpasses this and other more recent non-learning-based filters on two public data sets containing real image noise at various levels, despite an additional adjustment to the block-matching filter that gives it significantly better performance than has previously been cited on these data sets. The new bitonic filter has a signal-to-noise ratio 2.4 dB lower than the best learning-based techniques when they are optimally trained; however, the performance gap closes completely when those techniques are trained on data sets not directly related to the benchmark data. This demonstrates what can be achieved with a predictable, explainable, entirely local technique, which makes no assumptions of repeating patterns either within an image or across images, and hence produces residual images that are well behaved even at very high noise levels. Since the filter does not require training, it can still be used in situations where training is either difficult or inappropriate.
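A heavily simplified, non-adaptive sketch of the ranking framework is given below: robust (percentile-based) opening and closing are blended with weights derived from smoothed residuals. The parameters and the exact blending rule are assumptions, and the locally-adaptive domain and real-sensor-noise adjustments that define the paper's contribution are not reproduced.

```python
# Simplified, non-adaptive bitonic-style filtering via percentile morphology.
import numpy as np
from scipy.ndimage import percentile_filter, gaussian_filter

def robust_bitonic(img, size=7, centile=10):
    # robust opening: high-percentile filter of a low-percentile filter
    opening = percentile_filter(percentile_filter(img, centile, size=size),
                                100 - centile, size=size)
    # robust closing: the dual operation
    closing = percentile_filter(percentile_filter(img, 100 - centile, size=size),
                                centile, size=size)
    # weight the two estimates by how far the input deviates from the opposite one
    w_open = gaussian_filter(np.abs(img - closing), sigma=size / 3.0)
    w_close = gaussian_filter(np.abs(img - opening), sigma=size / 3.0)
    return (w_open * opening + w_close * closing) / (w_open + w_close + 1e-12)

noisy = np.random.rand(64, 64) + 0.1 * np.random.randn(64, 64)
print(robust_bitonic(noisy).shape)  # (64, 64)
```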
{"title":"Real Image Denoising With a Locally-Adaptive Bitonic Filter","authors":"Graham M. Treece","doi":"10.17863/CAM.75234","DOIUrl":"https://doi.org/10.17863/CAM.75234","url":null,"abstract":"Image noise removal is a common problem with many proposed solutions. The current standard is set by learning-based approaches, however these are not appropriate in all scenarios, perhaps due to lack of training data or the need for predictability in novel circumstances. The bitonic filter is a non-learning-based filter for removing noise from signals, with a mathematical morphology (ranking) framework in which the signal is postulated to be locally bitonic (having only one minimum or maximum) over some domain of finite extent. A novel version of this filter is developed in this paper, with a domain that is locally-adaptive to the signal, and other adjustments to allow application to real image sensor noise. These lead to significant improvements in noise reduction performance at no cost to processing times. The new bitonic filter performs better than the block-matching 3D filter for high levels of additive white Gaussian noise. It also surpasses this and other more recent non-learning-based filters for two public data sets containing real image noise at various levels. This is despite an additional adjustment to the block-matching filter, which leads to significantly better performance than has previously been cited on these data sets. The new bitonic filter has a signal-to-noise ratio 2.4dB lower than the best learning-based techniques when they are optimally trained. However, the performance gap is closed completely when these techniques are trained on data sets not directly related to the benchmark data. This demonstrates what can be achieved with a predictable, explainable, entirely local technique, which makes no assumptions of repeating patterns either within an image or across images, and hence creates residual images which are well behaved even in very high noise. Since the filter does not require training, it can still be used in situations where training is either difficult or inappropriate.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"31 1","pages":"3151-3165"},"PeriodicalIF":10.6,"publicationDate":"2021-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47479607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fractional Super-Resolution of Voxelized Point Clouds
Pub Date: 2021-01-15 | DOI: 10.36227/techrxiv.15032052.v1 | IEEE Transactions on Image Processing
Tomás M. Borges, Diogo C. Garcia, R. Queiroz
We present a method to super-resolve voxelized point clouds downsampled by a fractional factor, using lookup tables (LUTs) constructed from self-similarities within their own downsampled neighborhoods. The proposed method was developed to densify and increase the precision of voxelized point clouds and can be used, for example, to improve compression and rendering. We super-resolve the geometry and, for completeness, also interpolate texture by averaging colors from adjacent neighbors. To the best of our knowledge, our technique is the first developed specifically for intra-frame super-resolution of voxelized point clouds with arbitrary resampling scale factors. We present extensive test results on different point clouds, showing the effectiveness of the proposed approach against baseline methods.
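The self-similarity LUT idea can be shown with a toy example. The sketch below is a strong simplification: it uses an integer upscaling factor of 2 instead of a fractional factor, ignores color, and encodes each voxel's 3x3x3 neighborhood occupancy as the table key; all function names and the key encoding are assumptions for illustration.

```python
# Toy self-similarity LUT super-resolution of a voxelized point cloud (factor 2).
import numpy as np
from collections import defaultdict

def neighbourhood_key(occ, v):
    # 27-bit key describing which of the 3x3x3 neighbours of voxel v are occupied
    key = 0
    for i, d in enumerate(np.ndindex(3, 3, 3)):
        if tuple(v + np.array(d) - 1) in occ:
            key |= 1 << i
    return key

def super_resolve(voxels):
    voxels = {tuple(v) for v in voxels}
    coarse = {tuple(np.asarray(v) // 2) for v in voxels}        # downsample once more
    lut = defaultdict(set)
    for v in voxels:                                            # learn child offsets from self-similarity
        parent = np.asarray(v) // 2
        lut[neighbourhood_key(coarse, parent)].add(tuple(np.asarray(v) % 2))
    out = set()
    for v in voxels:                                            # apply the table one scale up
        offsets = lut.get(neighbourhood_key(voxels, np.asarray(v)), {(0, 0, 0)})
        for off in offsets:
            out.add(tuple(2 * np.asarray(v) + np.asarray(off)))
    return np.array(sorted(out))

cloud = np.array([[x, y, (x + y) % 4] for x in range(8) for y in range(8)])
print(len(cloud), "->", len(super_resolve(cloud)))
```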
{"title":"Fractional Super-Resolution of Voxelized Point Clouds","authors":"Tomás M. Borges, Diogo C. Garcia, R. Queiroz","doi":"10.36227/techrxiv.15032052.v1","DOIUrl":"https://doi.org/10.36227/techrxiv.15032052.v1","url":null,"abstract":"We present a method to super-resolve voxelized point clouds downsampled by a fractional factor, using lookup-tables (LUT) constructed from self-similarities from their own downsampled neighborhoods. The proposed method was developed to densify and to increase the precision of voxelized point clouds, and can be used, for example, as improve compression and rendering. We super-resolve the geometry, but we also interpolate texture by averaging colors from adjacent neighbors, for completeness. Our technique, as we understand, is the first specifically developed for intra-frame super-resolution of voxelized point clouds, for arbitrary resampling scale factors. We present extensive test results over different point clouds, showing the effectiveness of the proposed approach against baseline methods.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":" ","pages":"1-1"},"PeriodicalIF":10.6,"publicationDate":"2021-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46834443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}