Nowhere to Disguise: Spot Camouflaged Objects via Saliency Attribute Transfer

Wenda Zhao;Shigeng Xie;Fan Zhao;You He;Huchuan Lu
{"title":"无处伪装:通过显著性属性转移发现伪装对象","authors":"Wenda Zhao;Shigeng Xie;Fan Zhao;You He;Huchuan Lu","doi":"10.1109/TIP.2023.3277793","DOIUrl":null,"url":null,"abstract":"Both salient object detection (SOD) and camouflaged object detection (COD) are typical object segmentation tasks. They are intuitively contradictory, but are intrinsically related. In this paper, we explore the relationship between SOD and COD, and then borrow successful SOD models to detect camouflaged objects to save the design cost of COD models. The core insight is that both SOD and COD leverage two aspects of information: object semantic representations for distinguishing object and background, and context attributes that decide object category. Specifically, we start by decoupling context attributes and object semantic representations from both SOD and COD datasets through designing a novel decoupling framework with triple measure constraints. Then, we transfer saliency context attributes to the camouflaged images through introducing an attribute transfer network. The generated weakly camouflaged images can bridge the context attribute gap between SOD and COD, thereby improving the SOD models’ performances on COD datasets. Comprehensive experiments on three widely-used COD datasets verify the ability of the proposed method. Code and model are available at: \n<uri>https://github.com/wdzhao123/SAT</uri>\n.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2023-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Nowhere to Disguise: Spot Camouflaged Objects via Saliency Attribute Transfer\",\"authors\":\"Wenda Zhao;Shigeng Xie;Fan Zhao;You He;Huchuan Lu\",\"doi\":\"10.1109/TIP.2023.3277793\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Both salient object detection (SOD) and camouflaged object detection (COD) are typical object segmentation tasks. They are intuitively contradictory, but are intrinsically related. In this paper, we explore the relationship between SOD and COD, and then borrow successful SOD models to detect camouflaged objects to save the design cost of COD models. The core insight is that both SOD and COD leverage two aspects of information: object semantic representations for distinguishing object and background, and context attributes that decide object category. Specifically, we start by decoupling context attributes and object semantic representations from both SOD and COD datasets through designing a novel decoupling framework with triple measure constraints. Then, we transfer saliency context attributes to the camouflaged images through introducing an attribute transfer network. The generated weakly camouflaged images can bridge the context attribute gap between SOD and COD, thereby improving the SOD models’ performances on COD datasets. Comprehensive experiments on three widely-used COD datasets verify the ability of the proposed method. 
Code and model are available at: \\n<uri>https://github.com/wdzhao123/SAT</uri>\\n.\",\"PeriodicalId\":94032,\"journal\":{\"name\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-03-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10132418/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10132418/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Both salient object detection (SOD) and camouflaged object detection (COD) are typical object segmentation tasks. They are intuitively contradictory but intrinsically related. In this paper, we explore the relationship between SOD and COD, and then borrow successful SOD models to detect camouflaged objects, saving the design cost of COD models. The core insight is that both SOD and COD leverage two kinds of information: object semantic representations, which distinguish object from background, and context attributes, which decide the object category. Specifically, we start by decoupling context attributes and object semantic representations from both SOD and COD datasets by designing a novel decoupling framework with triple measure constraints. Then, we transfer saliency context attributes to the camouflaged images by introducing an attribute transfer network. The generated weakly camouflaged images bridge the context attribute gap between SOD and COD, thereby improving the SOD models' performance on COD datasets. Comprehensive experiments on three widely used COD datasets verify the effectiveness of the proposed method. Code and model are available at: https://github.com/wdzhao123/SAT.
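As a rough illustration of the pipeline the abstract describes, the sketch below pairs a decoupling encoder (splitting an image into object-semantic and context-attribute features) with an attribute-transfer decoder that recombines COD object semantics with SOD context attributes to synthesize a weakly camouflaged image. All module names, layer choices, and tensor shapes here are hypothetical assumptions for illustration only; they are not the authors' released implementation (see https://github.com/wdzhao123/SAT for the official code), and the triple measure constraints and training losses are omitted.

```python
# Minimal, hypothetical sketch of the saliency-attribute-transfer idea.
# Module names, shapes, and layers are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class DecouplingEncoder(nn.Module):
    """Splits an image into object-semantic features and context-attribute features."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Two heads: one for object semantics, one for context attributes.
        self.semantic_head = nn.Conv2d(feat_dim, feat_dim, 1)
        self.context_head = nn.Conv2d(feat_dim, feat_dim, 1)

    def forward(self, x):
        f = self.backbone(x)
        return self.semantic_head(f), self.context_head(f)


class AttributeTransferNet(nn.Module):
    """Recombines COD object semantics with SOD context attributes and decodes
    a weakly camouflaged image."""

    def __init__(self, feat_dim=64):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * feat_dim, feat_dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(feat_dim, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cod_semantic, sod_context):
        return self.decoder(torch.cat([cod_semantic, sod_context], dim=1))


if __name__ == "__main__":
    enc = DecouplingEncoder()
    transfer = AttributeTransferNet()
    cod_img = torch.randn(1, 3, 128, 128)   # camouflaged image
    sod_img = torch.randn(1, 3, 128, 128)   # salient image
    cod_sem, _ = enc(cod_img)               # keep COD object semantics
    _, sod_ctx = enc(sod_img)               # borrow SOD context attributes
    weakly_camouflaged = transfer(cod_sem, sod_ctx)
    print(weakly_camouflaged.shape)         # torch.Size([1, 3, 128, 128])
```

Under this reading, the generated weakly camouflaged images would then be used to adapt or evaluate an existing SOD model on COD data, which is the reuse of SOD models that the abstract describes.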