
Latest Publications in IEEE Transactions on Image Processing

Recalling Unknowns without Losing Precision: An Effective Solution to Large Model-Guided Open World Object Detection
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-18 | DOI: 10.1109/tip.2024.3459589
Yulin He, Wei Chen, Siqi Wang, Tianrui Liu, Meng Wang
{"title":"Recalling Unknowns without Losing Precision: An Effective Solution to Large Model-Guided Open World Object Detection","authors":"Yulin He, Wei Chen, Siqi Wang, Tianrui Liu, Meng Wang","doi":"10.1109/tip.2024.3459589","DOIUrl":"https://doi.org/10.1109/tip.2024.3459589","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"4 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142245415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird’s Eye View
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-09-09 | DOI: 10.1109/tip.2024.3427701
Yiming Wu, Ruixiang Li, Zequn Qin, Xinhai Zhao, Xi Li
{"title":"HeightFormer: Explicit Height Modeling without Extra Data for Camera-only 3D Object Detection in Bird’s Eye View","authors":"Yiming Wu, Ruixiang Li, Zequn Qin, Xinhai Zhao, Xi Li","doi":"10.1109/tip.2024.3427701","DOIUrl":"https://doi.org/10.1109/tip.2024.3427701","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"63 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142160432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation
IF 10.6 | Zone 1, Computer Science | Q1, Computer Science | Pub Date: 2024-04-10 | DOI: 10.1109/tip.2024.3385284
Wenjin Qin, Hailin Wang, Feng Zhang, Weijun Ma, Jianjun Wang, Tingwen Huang
{"title":"Nonconvex Robust High-Order Tensor Completion Using Randomized Low-Rank Approximation","authors":"Wenjin Qin, Hailin Wang, Feng Zhang, Weijun Ma, Jianjun Wang, Tingwen Huang","doi":"10.1109/tip.2024.3385284","DOIUrl":"https://doi.org/10.1109/tip.2024.3385284","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"22 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140544990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Relationship-Incremental Scene Graph Generation by a Divide-and-Conquer Pipeline with Feature Adapter
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-04-08 | DOI: 10.1109/tip.2024.3384096
Xuewei Li, Guangcong Zheng, Yunlong Yu, Naye Ji, Xi Li
{"title":"Relationship-Incremental Scene Graph Generation by a Divide-and-Conquer Pipeline with Feature Adapter","authors":"Xuewei Li, Guangcong Zheng, Yunlong Yu, Naye Ji, Xi Li","doi":"10.1109/tip.2024.3384096","DOIUrl":"https://doi.org/10.1109/tip.2024.3384096","url":null,"abstract":"","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"16 1","pages":""},"PeriodicalIF":10.6,"publicationDate":"2024-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140538492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards Transparent Deep Image Aesthetics Assessment with Tag-based Content Descriptors.
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-08-30 | DOI: 10.1109/TIP.2023.3308852
Jingwen Hou, Weisi Lin, Yuming Fang, Haoning Wu, Chaofeng Chen, Liang Liao, Weide Liu

Deep learning approaches for Image Aesthetics Assessment (IAA) have shown promising results in recent years, but the internal mechanisms of these models remain unclear. Previous studies have demonstrated that image aesthetics can be predicted using semantic features, such as pre-trained object classification features. However, these semantic features are learned implicitly, so previous works have not elucidated what the semantic features represent. In this work, we aim to create a more transparent deep learning framework for IAA by introducing explainable semantic features. To achieve this, we propose Tag-based Content Descriptors (TCDs), where each value in a TCD describes the relevance of an image to a human-readable tag that refers to a specific type of image content. This allows us to build IAA models from explicit descriptions of image contents. We first propose the explicit matching process to produce TCDs that adopt predefined tags to describe image contents. We show that a simple MLP-based IAA model with TCDs based only on predefined tags can achieve an SRCC of 0.767, which is comparable to most state-of-the-art methods. However, predefined tags may not be sufficient to describe all possible image contents that the model may encounter. Therefore, we further propose the implicit matching process to describe image contents that cannot be described by predefined tags. By integrating components obtained from the implicit matching process into TCDs, the IAA model further achieves an SRCC of 0.817, which significantly outperforms existing IAA methods. Both the explicit and implicit matching processes are realized by the proposed TCD generator. To evaluate the performance of the proposed TCD generator in matching images with predefined tags, we also labeled 5101 images with photography-related tags to form a validation set. Experimental results show that the proposed TCD generator can meaningfully assign photography-related tags to images.

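To make the TCD-to-score mapping concrete, here is a minimal sketch (not the authors' code) that feeds tag-relevance vectors into a small MLP head and evaluates the ranking with SRCC; the TCD length, layer sizes, and toy data are all assumptions.

```python
# Hypothetical sketch of an MLP-based IAA head over Tag-based Content
# Descriptors (TCDs); dimensions and data are illustrative only.
import torch
import torch.nn as nn
from scipy.stats import spearmanr

NUM_TAGS = 512  # assumed TCD length: one relevance value per predefined tag

class TCDAestheticsHead(nn.Module):
    """Maps a TCD (tag-relevance vector) to a scalar aesthetics score."""
    def __init__(self, num_tags=NUM_TAGS, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_tags, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, tcd):              # tcd: (B, num_tags), values in [0, 1]
        return self.mlp(tcd).squeeze(-1)

model = TCDAestheticsHead()
tcds = torch.rand(32, NUM_TAGS)          # stand-in for TCDs from a TCD generator
scores = model(tcds)

# SRCC: rank correlation between predicted and ground-truth opinion scores
gt = torch.rand(32)
srcc, _ = spearmanr(scores.detach().numpy(), gt.numpy())
print(f"SRCC on this toy batch: {srcc:.3f}")
```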
{"title":"Towards Transparent Deep Image Aesthetics Assessment with Tag-based Content Descriptors.","authors":"Jingwen Hou, Weisi Lin, Yuming Fang, Haoning Wu, Chaofeng Chen, Liang Liao, Weide Liu","doi":"10.1109/TIP.2023.3308852","DOIUrl":"10.1109/TIP.2023.3308852","url":null,"abstract":"<p><p>Deep learning approaches for Image Aesthetics Assessment (IAA) have shown promising results in recent years, but the internal mechanisms of these models remain unclear. Previous studies have demonstrated that image aesthetics can be predicted using semantic features, such as pre-trained object classification features. However, these semantic features are learned implicitly, and therefore, previous works have not elucidated what the semantic features are representing. In this work, we aim to create a more transparent deep learning framework for IAA by introducing explainable semantic features. To achieve this, we propose Tag-based Content Descriptors (TCDs), where each value in a TCD describes the relevance of an image to a human-readable tag that refers to a specific type of image content. This allows us to build IAA models from explicit descriptions of image contents. We first propose the explicit matching process to produce TCDs that adopt predefined tags to describe image contents. We show that a simple MLP-based IAA model with TCDs only based on predefined tags can achieve an SRCC of 0.767, which is comparable to most state-of-the-art methods. However, predefined tags may not be sufficient to describe all possible image contents that the model may encounter. Therefore, we further propose the implicit matching process to describe image contents that cannot be described by predefined tags. By integrating components obtained from the implicit matching process into TCDs, the IAA model further achieves an SRCC of 0.817, which significantly outperforms existing IAA methods. Both the explicit matching process and the implicit matching process are realized by the proposed TCD generator. To evaluate the performance of the proposed TCD generator in matching images with predefined tags, we also labeled 5101 images with photography-related tags to form a validation set. And experimental results show that the proposed TCD generator can meaningfully assign photography-related tags to images.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10207498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Field-of-View IoU for Object Detection in 360° Images.
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-07-21 | DOI: 10.1109/TIP.2023.3296013
Miao Cao, Satoshi Ikehata, Kiyoharu Aizawa

360° cameras have gained popularity over the last few years. In this paper, we propose two fundamental techniques for object detection in 360° images: Field-of-View IoU (FoV-IoU) and 360Augmentation. Although most object detection neural networks designed for perspective images are applicable to 360° images in equirectangular projection (ERP) format, their performance deteriorates owing to the distortion in ERP images. Our method can be readily integrated with existing perspective object detectors and significantly improves their performance. FoV-IoU computes the intersection-over-union of two field-of-view bounding boxes on a spherical image and can be used for training, inference, and evaluation, while 360Augmentation is a data augmentation technique specific to the 360° object detection task that randomly rotates a spherical image to correct the bias introduced by the sphere-to-plane projection. We conduct extensive experiments on the 360° indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our method.

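One special case of spherical-image augmentation is easy to illustrate: a rotation about the vertical axis reduces, in ERP format, to a horizontal circular shift of the pixels and box x-coordinates. A minimal sketch follows (the full 360Augmentation with arbitrary rotation axes requires a true spherical remapping, which this omits):

```python
# Minimal yaw-only 360-style augmentation for ERP images; the function name
# and box format are assumptions for illustration.
import numpy as np

def yaw_rotate_erp(image, boxes, shift_px=None):
    """image: (H, W, C) ERP image; boxes: (N, 4) [x1, y1, x2, y2] in pixels."""
    h, w = image.shape[:2]
    if shift_px is None:
        shift_px = np.random.randint(0, w)
    rotated = np.roll(image, shift_px, axis=1)        # wrap pixels around the sphere
    boxes = boxes.copy().astype(float)
    boxes[:, [0, 2]] = (boxes[:, [0, 2]] + shift_px) % w  # shift x-coords with wrap
    # Note: a box crossing the image seam now has x1 > x2 and must be treated
    # as a wrapped field-of-view box rather than split naively.
    return rotated, boxes
```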
{"title":"Field-of-View IoU for Object Detection in 360° Images.","authors":"Miao Cao, Satoshi Ikehata, Kiyoharu Aizawa","doi":"10.1109/TIP.2023.3296013","DOIUrl":"10.1109/TIP.2023.3296013","url":null,"abstract":"<p><p>360° cameras have gained popularity over the last few years. In this paper, we propose two fundamental techniques-Field-of-View IoU (FoV-IoU) and 360Augmentation for object detection in 360° images. Although most object detection neural networks designed for perspective images are applicable to 360° images in equirectangular projection (ERP) format, their performance deteriorates owing to the distortion in ERP images. Our method can be readily integrated with existing perspective object detectors and significantly improves the performance. The FoV-IoU computes the intersection-over-union of two Field-of-View bounding boxes in a spherical image which could be used for training, inference, and evaluation while 360Augmentation is a data augmentation technique specific to 360° object detection task which randomly rotates a spherical image and solves the bias due to the sphere-to-plane projection. We conduct extensive experiments on the 360° indoor dataset with different types of perspective object detectors and show the consistent effectiveness of our method.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9848778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network.
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-05-10 | DOI: 10.1109/TIP.2023.3273451
Dongyu Rao, Tianyang Xu, Xiao-Jun Wu

The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, existing CNN fusion approaches neglect long-range dependencies, impeding balanced whole-image-level perception in complex scenario fusion. In this paper, we therefore propose an infrared and visible image fusion algorithm based on the transformer module and adversarial learning. Inspired by the global interaction power of transformers, we use the transformer technique to learn effective global fusion relations. In particular, shallow features extracted by a CNN interact in the proposed transformer fusion module, refining the fusion relationship within the spatial scope and across channels simultaneously. In addition, adversarial learning is employed during training to improve output discrimination by imposing competitive consistency with the inputs, reflecting the specific characteristics of infrared and visible images. The experimental results demonstrate the effectiveness of the proposed modules, showing superior improvement over the state-of-the-art and generalising a novel paradigm via transformer and adversarial learning in the fusion task.

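A minimal sketch of the general idea of transformer-based fusion (not the TGFuse architecture): shallow CNN features of the two modalities are flattened into tokens, mixed jointly by a transformer encoder so every location attends across both images, and decoded into a fused output. Both inputs are assumed single-channel, and all layer sizes are arbitrary assumptions.

```python
# Toy transformer fusion module; names and dimensions are hypothetical.
import torch
import torch.nn as nn

class ToyTransformerFusion(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.stem = nn.Conv2d(1, dim, 3, padding=1)   # shared shallow extractor
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.mixer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Conv2d(dim, 1, 3, padding=1)   # decode fused feature map

    def forward(self, ir, vis):                       # each: (B, 1, H, W)
        b, _, h, w = ir.shape
        t_ir = self.stem(ir).flatten(2).transpose(1, 2)    # (B, H*W, dim)
        t_vis = self.stem(vis).flatten(2).transpose(1, 2)
        # joint sequence so attention spans both modalities and all locations
        tokens = self.mixer(torch.cat([t_ir, t_vis], dim=1))
        f_ir, f_vis = tokens.chunk(2, dim=1)
        feat = (f_ir + f_vis).transpose(1, 2).reshape(b, -1, h, w)
        return torch.sigmoid(self.head(feat))         # fused image in [0, 1]

fused = ToyTransformerFusion()(torch.rand(2, 1, 32, 32), torch.rand(2, 1, 32, 32))
```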
{"title":"TGFuse: An Infrared and Visible Image Fusion Approach Based on Transformer and Generative Adversarial Network.","authors":"Dongyu Rao, Tianyang Xu, Xiao-Jun Wu","doi":"10.1109/TIP.2023.3273451","DOIUrl":"10.1109/TIP.2023.3273451","url":null,"abstract":"<p><p>The end-to-end image fusion framework has achieved promising performance, with dedicated convolutional networks aggregating the multi-modal local appearance. However, long-range dependencies are directly neglected in existing CNN fusion approaches, impeding balancing the entire image-level perception for complex scenario fusion. In this paper, therefore, we propose an infrared and visible image fusion algorithm based on the transformer module and adversarial learning. Inspired by the global interaction power, we use the transformer technique to learn the effective global fusion relations. In particular, shallow features extracted by CNN are interacted in the proposed transformer fusion module to refine the fusion relationship within the spatial scope and across channels simultaneously. Besides, adversarial learning is designed in the training process to improve the output discrimination via imposing competitive consistency from the inputs, reflecting the specific characteristics in infrared and visible images. The experimental performance demonstrates the effectiveness of the proposed modules, with superior improvement against the state-of-the-art, generalising a novel paradigm via transformer and adversarial learning in the fusion task.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9443051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection.
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-04-14 | DOI: 10.1109/TIP.2023.3266163
Lin Hong, Xin Wang, Gan Zhang, Ming Zhao

Underwater salient object detection (USOD) attracts increasing interest for its promising performance in various underwater visual tasks. However, USOD research is still in its early stages due to the lack of large-scale datasets in which salient objects are well defined and pixel-wise annotated. To address this issue, this paper introduces a new dataset named USOD10K. It consists of 10,255 underwater images, covering 70 categories of salient objects in 12 different underwater scenes. In addition, salient object boundaries and depth maps of all images are provided in this dataset. USOD10K is the first large-scale dataset in the USOD community, making a significant leap in diversity, complexity, and scalability. Secondly, a simple but strong baseline termed TC-USOD is designed for USOD10K. TC-USOD adopts a hybrid encoder-decoder architecture that uses transformers and convolutions as the basic computational building blocks of the encoder and decoder, respectively. Thirdly, we comprehensively summarize 35 cutting-edge SOD/USOD methods and benchmark them on the existing USOD dataset and USOD10K. The results show that our TC-USOD achieves superior performance on all datasets tested. Finally, several other use cases of USOD10K are discussed, and future directions of USOD research are pointed out. This work will promote the development of USOD research and facilitate further research on underwater visual tasks and visually guided underwater robots. To pave the way in this research field, the entire dataset, code, and benchmark results are publicly available: https://github.com/LinHong-HIT/USOD10K.

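A minimal sketch of a hybrid design in the spirit of TC-USOD (not the released model): a transformer encoder over patch tokens followed by a convolutional decoder that upsamples to a per-pixel saliency map. The patch size, depth, and widths are assumptions.

```python
# Toy transformer-encoder / conv-decoder saliency network; illustrative only.
import torch
import torch.nn as nn

class ToySaliencyNet(nn.Module):
    def __init__(self, dim=96, patch=8):
        super().__init__()
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.decoder = nn.Sequential(   # conv decoder, x8 upsample back to input size
            nn.Upsample(scale_factor=2), nn.Conv2d(dim, dim // 2, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(dim // 2, dim // 4, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2), nn.Conv2d(dim // 4, 1, 3, padding=1),
        )

    def forward(self, x):                             # x: (B, 3, H, W)
        f = self.embed(x)                             # (B, dim, H/8, W/8)
        b, c, h, w = f.shape
        tokens = self.encoder(f.flatten(2).transpose(1, 2))   # global attention
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return torch.sigmoid(self.decoder(f))         # (B, 1, H, W) saliency map

sal = ToySaliencyNet()(torch.rand(1, 3, 64, 64))      # -> (1, 1, 64, 64)
```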
{"title":"USOD10K: A New Benchmark Dataset for Underwater Salient Object Detection.","authors":"Lin Hong, Xin Wang, Gan Zhang, Ming Zhao","doi":"10.1109/TIP.2023.3266163","DOIUrl":"10.1109/TIP.2023.3266163","url":null,"abstract":"<p><p>Underwater salient object detection (USOD) attracts increasing interest for its promising performance in various underwater visual tasks. However, USOD research is still in its early stages due to the lack of large-scale datasets within which salient objects are well-defined and pixel-wise annotated. To address this issue, this paper introduces a new dataset named USOD10K. It consists of 10,255 underwater images, covering 70 categories of salient objects in 12 different underwater scenes. In addition, salient object boundaries and depth maps of all images are provided in this dataset. The USOD10K is the first large-scale dataset in the USOD community, making a significant leap in diversity, complexity, and scalability. Secondly, a simple but strong baseline termed TC-USOD is designed for the USOD10K. The TC-USOD adopts a hybrid architecture based on an encoder-decoder design that leverages transformer and convolution as the basic computational building block of the encoder and decoder, respectively. Thirdly, we make a comprehensive summarization of 35 cutting-edge SOD/USOD methods and benchmark them over the existing USOD dataset and the USOD10K. The results show that our TC-USOD obtained superior performance on all datasets tested. Finally, several other use cases of the USOD10K are discussed, and future directions of USOD research are pointed out. This work will promote the development of the USOD research and facilitate further research on underwater visual tasks and visually-guided underwater robots. To pave the road in this research field, all the dataset, code, and benchmark results are publicly available: https://github.com/LinHong-HIT/USOD10K.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9781338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DVMark: A Deep Multiscale Framework for Video Watermarking.
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2023-03-28 | DOI: 10.1109/TIP.2023.3251737
Xiyang Luo, Yinxiao Li, Huiwen Chang, Ce Liu, Peyman Milanfar, Feng Yang

Video watermarking embeds a message into a cover video in an imperceptible manner, which can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design where the watermarks are distributed across multiple spatial-temporal scales. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.

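A minimal sketch of the multiscale embedding idea (not DVMark's actual architecture): a bit-string message is broadcast to spatio-temporal feature volumes at two spatial scales and added to the video as a small residual. The message length, number of scales, and layer widths are illustrative assumptions.

```python
# Toy multiscale video-watermark embedder; all names and sizes hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

MSG_BITS = 32

class ToyVideoWatermarkEncoder(nn.Module):
    def __init__(self, bits=MSG_BITS, ch=16):
        super().__init__()
        self.fine = nn.Conv3d(3 + bits, ch, 3, padding=1)    # full resolution
        self.coarse = nn.Conv3d(3 + bits, ch, 3, padding=1)  # half resolution
        self.out = nn.Conv3d(2 * ch, 3, 3, padding=1)

    def forward(self, video, msg):        # video: (B, 3, T, H, W); msg: (B, bits)
        b, _, t, h, w = video.shape
        m = msg[:, :, None, None, None].expand(b, -1, t, h, w)
        fine = self.fine(torch.cat([video, m], dim=1))
        small = F.interpolate(video, scale_factor=(1, 0.5, 0.5))
        m2 = msg[:, :, None, None, None].expand(b, -1, t, h // 2, w // 2)
        coarse = self.coarse(torch.cat([small, m2], dim=1))
        coarse = F.interpolate(coarse, size=(t, h, w))
        residual = self.out(torch.cat([fine, coarse], dim=1))
        return video + 0.1 * residual     # imperceptibly perturbed video

wm = ToyVideoWatermarkEncoder()(torch.rand(1, 3, 8, 64, 64),
                                torch.randint(0, 2, (1, MSG_BITS)).float())
```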
{"title":"DVMark: A Deep Multiscale Framework for Video Watermarking.","authors":"Xiyang Luo, Yinxiao Li, Huiwen Chang, Ce Liu, Peyman Milanfar, Feng Yang","doi":"10.1109/TIP.2023.3251737","DOIUrl":"10.1109/TIP.2023.3251737","url":null,"abstract":"<p><p>Video watermarking embeds a message into a cover video in an imperceptible manner, which can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design where the watermarks are distributed across multiple spatial-temporal scales. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.</p>","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"PP ","pages":""},"PeriodicalIF":10.6,"publicationDate":"2023-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9266354","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process
IF 10.6 | Zone 1, Computer Science | Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2022-08-16 | DOI: 10.1109/TAP.2022.3218759
Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan
Existing deraining methods focus mainly on a single input image. However, with just a single input image, it is extremely difficult to accurately detect and remove rain streaks in order to restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera, and LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available in LFIs, such as the 2D array of sub-views and the disparity map of each sub-view, for effective rain removal is still a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI. To make full use of the LFI, it adopts 4D convolutional layers to process all sub-views of the LFI simultaneously. In the pipeline, a rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multiple scales. Semi-supervised learning is introduced so that MSGP can accurately detect rain streaks by training on both virtual-world and real-world rainy LFIs at multiple scales, computing pseudo ground truths for real-world rain streaks. We then feed all sub-views, with the predicted rain streaks subtracted, into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate depth maps, which are later converted into fog maps. Finally, all sub-views, concatenated with the corresponding rain streaks and fog maps, are fed into a powerful rainy-LFI restoration model based on an adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic and real-world LFIs demonstrate the effectiveness of our proposed method.
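PyTorch has no native 4D convolution, so a common workaround, sketched below under the assumption of a light field tensor shaped (B, C, U, V, H, W) with (U, V) indexing sub-views, decomposes the 4D kernel into k Conv3d slices summed over shifted windows along the first angular dimension. This illustrates the building block only, not the 4D-MGP-SRRNet pipeline.

```python
# Sketch of a 4D convolution built from Conv3d slices; illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Conv4d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, pad=1):
        super().__init__()
        self.k, self.pad = k, pad
        # one Conv3d per kernel slice along U; the bias is counted once
        self.slices = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, k, padding=pad, bias=(i == 0))
            for i in range(k)
        ])

    def forward(self, x):                         # x: (B, C, U, V, H, W)
        x = F.pad(x, (0, 0, 0, 0, 0, 0, self.pad, self.pad))  # zero-pad U
        u_out = x.shape[2] - self.k + 1
        out = None
        for i, conv in enumerate(self.slices):
            # apply the i-th kernel slice to every U-window, stack along U
            cur = torch.stack([conv(x[:, :, j + i]) for j in range(u_out)], dim=2)
            out = cur if out is None else out + cur
        return out

y = Conv4d(3, 8)(torch.rand(1, 3, 5, 5, 32, 32))  # -> (1, 8, 5, 5, 32, 32)
```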
{"title":"Rain Removal From Light Field Images With 4D Convolution and Multi-Scale Gaussian Process","authors":"Zhiqiang Yuan, Jianhua Zhang, Yilin Ji, G. Pedersen, W. Fan","doi":"10.1109/TAP.2022.3218759","DOIUrl":"https://doi.org/10.1109/TAP.2022.3218759","url":null,"abstract":"Existing deraining methods focus mainly on a single input image. However, with just a single input image, it is extremely difficult to accurately detect and remove rain streaks, in order to restore a rain-free image. In contrast, a light field image (LFI) embeds abundant 3D structure and texture information of the target scene by recording the direction and position of each incident ray via a plenoptic camera. LFIs are becoming popular in the computer vision and graphics communities. However, making full use of the abundant information available from LFIs, such as 2D array of sub-views and the disparity map of each sub-view, for effective rain removal is still a challenging problem. In this paper, we propose a novel method, 4D-MGP-SRRNet, for rain streak removal from LFIs. Our method takes as input all sub-views of a rainy LFI. To make full use of the LFI, it adopts 4D convolutional layers to simultaneously process all sub-views of the LFI. In the pipeline, the rain detection network, MGPDNet, with a novel Multi-scale Self-guided Gaussian Process (MSGP) module is proposed to detect high-resolution rain streaks from all sub-views of the input LFI at multi-scales. Semi-supervised learning is introduced for MSGP to accurately detect rain streaks by training on both virtual-world rainy LFIs and real-world rainy LFIs at multi-scales via computing pseudo ground truths for real-world rain streaks. We then feed all sub-views subtracting the predicted rain streaks into a 4D convolution-based Depth Estimation Residual Network (DERNet) to estimate the depth maps, which are later converted into fog maps. Finally, all sub-views concatenated with the corresponding rain streaks and fog maps are fed into a powerful rainy LFI restoring model based on the adversarial recurrent neural network to progressively eliminate rain streaks and recover the rain-free LFI. Extensive quantitative and qualitative evaluations conducted on both synthetic LFIs and real-world LFIs demonstrate the effectiveness of our proposed method.","PeriodicalId":13217,"journal":{"name":"IEEE Transactions on Image Processing","volume":"32 1","pages":"921-936"},"PeriodicalIF":10.6,"publicationDate":"2022-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48830864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5