Test Time Adaptation with Regularized Loss for Weakly Supervised Salient Object Detection

O. Veksler
{"title":"Test Time Adaptation with Regularized Loss for Weakly Supervised Salient Object Detection","authors":"O. Veksler","doi":"10.1109/CVPR52729.2023.00711","DOIUrl":null,"url":null,"abstract":"It is well known that CNNs tend to overfit to the training data. Test-time adaptation is an extreme approach to deal with overfitting: given a test image, the aim is to adapt the trained model to that image. Indeed nothing can be closer to the test data than the test image itself. The main difficulty of test-time adaptation is that the ground truth is not available. Thus test-time adaptation, while intriguing, applies to only a few scenarios where one can design an effective loss function that does not require ground truth. We propose the first approach for test-time Salient Object Detection (SOD) in the context of weak supervision. Our approach is based on a so called regularized loss function, which can be used for training CNN when pixel precise ground truth is unavail-able. Regularized loss tends to have lower values for the more likely object segments, and thus it can be used to fine-tune an already trained CNN to a given test image, adapting to images unseen during training. We develop a regularized loss function particularly suitable for test-time adaptation and show that our approach significantly outperforms prior work for weakly supervised SOD.","PeriodicalId":376416,"journal":{"name":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CVPR52729.2023.00711","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

It is well known that CNNs tend to overfit to the training data. Test-time adaptation is an extreme approach to dealing with overfitting: given a test image, the aim is to adapt the trained model to that image. Indeed, nothing can be closer to the test data than the test image itself. The main difficulty of test-time adaptation is that the ground truth is not available. Thus test-time adaptation, while intriguing, applies only to the few scenarios where one can design an effective loss function that does not require ground truth. We propose the first approach for test-time Salient Object Detection (SOD) in the context of weak supervision. Our approach is based on a so-called regularized loss function, which can be used for training a CNN when pixel-precise ground truth is unavailable. Regularized loss tends to have lower values for the more likely object segments, and thus it can be used to fine-tune an already trained CNN to a given test image, adapting to images unseen during training. We develop a regularized loss function particularly suitable for test-time adaptation and show that our approach significantly outperforms prior work for weakly supervised SOD.
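To make the recipe in the abstract concrete, the sketch below fine-tunes a copy of a pretrained saliency network on a single test image using only losses that need no ground truth. The specific regularized loss developed in the paper is not reproduced here; the edge-aware pairwise smoothness term and the anchoring term to the pre-adaptation prediction are illustrative stand-ins, and the function names, output shapes, and hyperparameters (steps, lr, reg_weight) are assumptions.

import copy
import torch
import torch.nn.functional as F


def pairwise_smoothness_loss(pred, image, sigma=0.1):
    """Illustrative regularizer: penalize saliency changes between
    neighboring pixels that have similar color.

    pred:  (1, 1, H, W) saliency probabilities in [0, 1]
    image: (1, 3, H, W) input image, values roughly in [0, 1]
    """
    loss = 0.0
    # Horizontal and vertical neighbors only, for brevity.
    for dy, dx in [(0, 1), (1, 0)]:
        p0 = pred[:, :, : pred.shape[2] - dy, : pred.shape[3] - dx]
        p1 = pred[:, :, dy:, dx:]
        i0 = image[:, :, : image.shape[2] - dy, : image.shape[3] - dx]
        i1 = image[:, :, dy:, dx:]
        # Edge-aware weight: similar colors -> strong penalty for differing labels.
        w = torch.exp(-((i0 - i1) ** 2).sum(dim=1, keepdim=True) / (2 * sigma ** 2))
        loss = loss + (w * (p0 - p1).abs()).mean()
    return loss


def test_time_adapt(model, image, steps=10, lr=1e-4, reg_weight=1.0):
    """Fine-tune a copy of a pretrained saliency model on one test image.

    Assumes model(image) returns (1, 1, H, W) saliency logits.
    """
    adapted = copy.deepcopy(model)  # keep the original weights untouched
    adapted.train()
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)

    with torch.no_grad():
        init_pred = torch.sigmoid(model(image))  # prediction before adaptation

    for _ in range(steps):
        pred = torch.sigmoid(adapted(image))
        # Anchor to the initial prediction so the regularizer cannot collapse
        # the map to a constant, plus the edge-aware smoothness term above.
        anchor = F.binary_cross_entropy(pred, init_pred)
        smooth = pairwise_smoothness_loss(pred, image)
        loss = anchor + reg_weight * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()

    adapted.eval()
    with torch.no_grad():
        return torch.sigmoid(adapted(image))

The key design point, which this sketch shares with the approach described in the abstract, is that every term in the adaptation loss is computed from the test image and the model's own predictions, so no pixel-level annotation is ever needed at test time.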