CATNet: A Cascaded and Aggregated Transformer Network for RGB-D Salient Object Detection

IEEE Transactions on Multimedia · Impact Factor: 8.4 · CAS Tier 1 (Computer Science) · JCR Q1 (Computer Science, Information Systems) · Pub Date: 2023-07-11 · DOI: 10.1109/TMM.2023.3294003
Fuming Sun, Peng Ren, Bowen Yin, Fasheng Wang, Haojie Li
Citations: 0

Abstract

Salient object detection (SOD) is an important preprocessing step for many computer vision tasks. Most existing RGB-D SOD models employ element-wise addition or concatenation to directly aggregate and decode multi-scale features when predicting saliency maps. However, because features at different scales differ substantially, such aggregation strategies can cause information loss or redundancy, and few methods explicitly consider how to connect features across scales during decoding, which degrades detection performance. To this end, we propose a cascaded and aggregated Transformer network (CATNet) consisting of three key modules: an attention feature enhancement module (AFEM), a cross-modal fusion module (CMFM), and a cascaded correction decoder (CCD). The AFEM builds on atrous spatial pyramid pooling, using dilated convolutions and a multi-head self-attention mechanism to extract multi-scale semantic information and global context from high-level features, thereby enhancing them. The CMFM enhances and then fuses the RGB and depth features, alleviating the problem of poor-quality depth maps. The CCD comprises two subdecoders arranged in a cascade; it suppresses noise in low-level features and mitigates the differences between features at different scales. Moreover, the CCD uses a feedback mechanism that exploits supervised features to correct and repair the subdecoder's output, mitigating the information loss caused by upsampling during multi-scale feature aggregation. Extensive experimental results demonstrate that the proposed CATNet outperforms 14 state-of-the-art RGB-D methods on 7 challenging benchmarks.
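The AFEM described above combines atrous spatial pyramid pooling with multi-head self-attention. A minimal PyTorch sketch of that combination is shown below; it is not the authors' code, and the channel size, dilation rates, head count, and fusion-by-1x1-convolution are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of an AFEM-style block: parallel dilated convolutions (as in
# atrous spatial pyramid pooling) capture multi-scale local context,
# multi-head self-attention over the flattened spatial grid adds global
# context, and a 1x1 convolution fuses all branches.
import torch
import torch.nn as nn


class AFEMSketch(nn.Module):
    def __init__(self, channels=64, dilations=(1, 2, 4), heads=4):
        super().__init__()
        # One 3x3 dilated conv per rate; padding = dilation keeps H x W.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(channels * (len(dilations) + 1), channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        # Multi-scale local context from the dilated-convolution branches.
        local = [branch(x) for branch in self.branches]
        # Global context: self-attention over H*W spatial positions.
        seq = x.flatten(2).transpose(1, 2)            # (B, H*W, C)
        glob, _ = self.attn(seq, seq, seq)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        # Concatenate all branches along channels and fuse back to C.
        return self.fuse(torch.cat(local + [glob], dim=1))


x = torch.randn(1, 64, 8, 8)
out = AFEMSketch()(x)
print(tuple(out.shape))  # (1, 64, 8, 8): channels and spatial size preserved
```

Because every branch preserves the spatial resolution, the enhanced high-level features can be passed to a decoder in place of the raw ones without any shape changes.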
Source Journal

IEEE Transactions on Multimedia (Engineering & Technology - Telecommunications)
CiteScore: 11.70
Self-citation rate: 11.00%
Articles per year: 576
Review time: 5.5 months
Journal description: The IEEE Transactions on Multimedia delves into diverse aspects of multimedia technology and applications, covering circuits, networking, signal processing, systems, software, and systems integration. The scope aligns with the Fields of Interest of the sponsors, ensuring a comprehensive exploration of research in multimedia.
Latest articles in this journal:

- Deep Mutual Distillation for Unsupervised Domain Adaptation Person Re-identification
- Collaborative License Plate Recognition via Association Enhancement Network With Auxiliary Learning and a Unified Benchmark
- VLDadaptor: Domain Adaptive Object Detection with Vision-Language Model Distillation
- Camera-Incremental Object Re-Identification With Identity Knowledge Evolution
- Dual-View Data Hallucination With Semantic Relation Guidance for Few-Shot Image Recognition