Learning to Detect Salient Object With Multi-Source Weak Supervision

Hongshuang Zhang, Yu Zeng, Huchuan Lu, Lihe Zhang, Jianhua Li, Jinqing Qi
{"title":"Learning to Detect Salient Object With Multi-Source Weak Supervision","authors":"Hongshuang Zhang;Yu Zeng;Huchuan Lu;Lihe Zhang;Jianhua Li;Jinqing Qi","doi":"10.1109/TPAMI.2021.3059783","DOIUrl":null,"url":null,"abstract":"High-cost pixel-level annotations makes it appealing to train saliency detection models with weak supervision. However, a single weak supervision source hardly contain enough information to train a well-performing model. To this end, we introduce a unified two-stage framework to learn from category labels, captions, web images and unlabeled images. In the first stage, we design a classification network (CNet) and a caption generation network (PNet), which learn to predict object categories and generate captions, respectively, meanwhile highlights the potential foreground regions. We present an attention transfer loss to transmit supervisions between two tasks and an attention coherence loss to encourage the networks to detect generally salient regions instead of task-specific regions. In the second stage, we create two complementary training datasets using CNet and PNet, i.e., natural image dataset with noisy labels for adapting saliency prediction network (SNet) to natural image input, and synthesized image dataset by pasting objects on background images for providing SNet with accurate ground-truth. During the testing phases, we only need SNet to predict saliency maps. Experiments indicate the performance of our method compares favorably against unsupervised, weakly supervised methods and even some supervised methods.","PeriodicalId":94034,"journal":{"name":"IEEE transactions on pattern analysis and machine intelligence","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2021-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1109/TPAMI.2021.3059783","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on pattern analysis and machine intelligence","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/9355009/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

High-cost pixel-level annotations make it appealing to train saliency detection models with weak supervision. However, a single source of weak supervision rarely contains enough information to train a well-performing model. To this end, we introduce a unified two-stage framework that learns from category labels, captions, web images, and unlabeled images. In the first stage, we design a classification network (CNet) and a caption generation network (PNet), which learn to predict object categories and generate captions, respectively, while highlighting potential foreground regions. We present an attention transfer loss to transmit supervision between the two tasks and an attention coherence loss to encourage the networks to detect generally salient regions rather than task-specific regions. In the second stage, we use CNet and PNet to create two complementary training datasets: a natural image dataset with noisy labels, which adapts the saliency prediction network (SNet) to natural image input, and a synthesized image dataset, built by pasting objects onto background images, which provides SNet with accurate ground truth. At test time, only SNet is needed to predict saliency maps. Experiments show that our method compares favorably against unsupervised and weakly supervised methods, and even against some fully supervised methods.
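To make the first-stage objective concrete, below is a minimal PyTorch sketch of the two losses named in the abstract. The paper's exact formulations are not reproduced here, so the functional forms (binarized pseudo-labels for the transfer loss, a pixel-wise agreement term for the coherence loss), the threshold value, and all tensor shapes are illustrative assumptions, not the authors' definitions.

```python
# Illustrative sketch only: the binarization-based transfer loss and
# MSE-based coherence loss are assumptions, not the paper's equations.
import torch
import torch.nn.functional as F

def attention_transfer_loss(att_src, att_tgt, threshold=0.5):
    """Transmit supervision between tasks: treat one network's attention
    map as a (detached, binarized) pseudo ground truth for the other.
    `threshold` is a hypothetical choice."""
    pseudo = (att_src.detach() > threshold).float()
    return F.binary_cross_entropy(att_tgt, pseudo)

def attention_coherence_loss(att_cnet, att_pnet):
    """Penalize disagreement between the two attention maps so both
    networks converge on generally salient regions rather than
    task-specific ones."""
    return F.mse_loss(att_cnet, att_pnet)

# Hypothetical usage with attention maps in [0, 1] of shape (B, 1, H, W):
att_c = torch.rand(4, 1, 56, 56)  # stand-in for CNet attention
att_p = torch.rand(4, 1, 56, 56)  # stand-in for PNet attention
loss = (attention_transfer_loss(att_c, att_p)
        + attention_transfer_loss(att_p, att_c)
        + attention_coherence_loss(att_c, att_p))
```

Applying the transfer loss symmetrically in both directions, as sketched above, is one plausible reading of "transmit supervision between the two tasks"; the coherence term then keeps the shared foreground estimate consistent across CNet and PNet.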