CoNet: A Consistency-Oriented Network for Camouflaged Object Segmentation

IF 11.1 | CAS Region 1 (Engineering & Technology) | JCR Q1 (Engineering, Electrical & Electronic) | IEEE Transactions on Circuits and Systems for Video Technology | Pub Date: 2024-09-17 | DOI: 10.1109/TCSVT.2024.3462465
Fei Wu;Jun Yin;Xiaochuan Li;Jianfeng Wu;Da Jin;Jiamin Yang
{"title":"CoNet: A Consistency-Oriented Network for Camouflaged Object Segmentation","authors":"Fei Wu;Jun Yin;Xiaochuan Li;Jianfeng Wu;Da Jin;Jiamin Yang","doi":"10.1109/TCSVT.2024.3462465","DOIUrl":null,"url":null,"abstract":"Camouflaged object segmentation (COS) is a recently emerging task due to its broad application prospect. The coloration and texture similarities between the objects and their surroundings makes it a challenging task. Motivated by this, we propose a consistency-oriented network (CoNet) to address these challenges by looking into the visual consistencies between object and background. Specifically, we design a primary detection module (PDM) to firstly locate the object by fusing the backbone features. A filter is introduced to better focus on the object’s foreground feature based on its primary location. To obtain the visual consistency between the object and background, the foreground feature is then fed into the consistency evaluation module (CEM) to interact with the global feature. Both features are simultaneously processed by a shared discriminator and then fused together to attain the consistency attention map. The final feature refinement is conducted in the detail refinement module (DRM) by merging the consistency attention map with the global features via hierarchical feature fusion. Extensive experiments on benchmark COS datasets show that the proposed CoNet outperforms the state-of-the-art (SOTA) models in most cases. Ablation experiments verify the effectiveness of different backbones, designed modules and upsampling methods. Furthermore, extra studies on the labelling techniques and interdisciplinary applications demonstrate the great potential of the proposed CoNet.","PeriodicalId":13082,"journal":{"name":"IEEE Transactions on Circuits and Systems for Video Technology","volume":"35 1","pages":"287-299"},"PeriodicalIF":11.1000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Circuits and Systems for Video Technology","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10681598/","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Camouflaged object segmentation (COS) is a recently emerging task with broad application prospects. The coloration and texture similarities between camouflaged objects and their surroundings make the task challenging. Motivated by this, we propose a consistency-oriented network (CoNet) that addresses these challenges by examining the visual consistency between object and background. Specifically, we design a primary detection module (PDM) that first locates the object by fusing the backbone features. A filter is then introduced to focus on the object's foreground feature based on this primary location. To capture the visual consistency between object and background, the foreground feature is fed into the consistency evaluation module (CEM) to interact with the global feature. Both features are processed simultaneously by a shared discriminator and then fused to obtain the consistency attention map. Final feature refinement is performed in the detail refinement module (DRM), which merges the consistency attention map with the global features via hierarchical feature fusion. Extensive experiments on benchmark COS datasets show that the proposed CoNet outperforms state-of-the-art (SOTA) models in most cases. Ablation experiments verify the effectiveness of the chosen backbones, the designed modules, and the upsampling methods. Furthermore, additional studies on labelling techniques and interdisciplinary applications demonstrate the great potential of the proposed CoNet.
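The abstract names the pipeline stages but not their internals, so the following PyTorch sketch is one plausible reading of the described flow: the PDM locates the object, a filter extracts the foreground feature, the CEM derives a consistency attention map through a weight-shared discriminator, and the DRM refines the result. Every layer choice, channel width, and class name used here (PDM, CEM, DRM, CoNetSketch) is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch; all module internals are assumptions inferred
# from the abstract, not the authors' released code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PDM(nn.Module):
    """Primary detection module: fuses multi-level backbone features and
    predicts a coarse map that locates the camouflaged object."""
    def __init__(self, in_channels=(256, 512, 1024, 2048), dim=64):
        super().__init__()
        self.fuse = nn.Conv2d(sum(in_channels), dim, kernel_size=1)
        self.head = nn.Conv2d(dim, 1, kernel_size=3, padding=1)

    def forward(self, feats):
        size = feats[0].shape[-2:]
        feats = [F.interpolate(f, size=size, mode="bilinear",
                               align_corners=False) for f in feats]
        fused = F.relu(self.fuse(torch.cat(feats, dim=1)))
        coarse = torch.sigmoid(self.head(fused))  # primary location map
        return fused, coarse


class CEM(nn.Module):
    """Consistency evaluation module: the foreground and global features
    pass through a weight-shared discriminator, then fuse into a
    consistency attention map."""
    def __init__(self, dim=64):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))
        self.to_attn = nn.Conv2d(2 * dim, 1, kernel_size=1)

    def forward(self, fg, glob):
        d_fg = self.shared(fg)       # identical weights for both inputs
        d_glob = self.shared(glob)
        return torch.sigmoid(self.to_attn(torch.cat([d_fg, d_glob], dim=1)))


class DRM(nn.Module):
    """Detail refinement module: merges the consistency attention map with
    the global feature. This simple attention-reweighted residual is a
    stand-in for the paper's hierarchical feature fusion."""
    def __init__(self, dim=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim, 1, kernel_size=3, padding=1))

    def forward(self, glob, attn):
        return torch.sigmoid(self.refine(glob * attn + glob))


class CoNetSketch(nn.Module):
    """End-to-end flow: locate -> filter foreground -> evaluate
    consistency -> refine."""
    def __init__(self):
        super().__init__()
        self.pdm, self.cem, self.drm = PDM(), CEM(), DRM()

    def forward(self, backbone_feats):
        glob, coarse = self.pdm(backbone_feats)
        fg = glob * coarse            # the "filter": keep foreground responses
        attn = self.cem(fg, glob)     # consistency attention map
        return self.drm(glob, attn)


# Smoke test with a random ResNet-50-like feature pyramid.
feats = [torch.randn(1, c, 96 // 2 ** i, 96 // 2 ** i)
         for i, c in enumerate((256, 512, 1024, 2048))]
print(CoNetSketch()(feats).shape)  # torch.Size([1, 1, 96, 96])
```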
Source journal
CiteScore: 13.80
Self-citation rate: 27.40%
Articles published: 660
Review time: 5 months
About the journal: The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.