Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features

Jiuqing Dong;Yifan Yao;Wei Jin;Heng Zhou;Yongbin Gao;Zhijun Fang
{"title":"Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features","authors":"Jiuqing Dong;Yifan Yao;Wei Jin;Heng Zhou;Yongbin Gao;Zhijun Fang","doi":"10.1109/TIP.2024.3468874","DOIUrl":null,"url":null,"abstract":"Ensuring the reliability of open-world intelligent systems heavily relies on effective out-of-distribution (OOD) detection. Despite notable successes in existing OOD detection methods, their performance in scenarios with limited training samples is still suboptimal. Therefore, we first construct a comprehensive few-shot OOD detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like fully fine-tuning and linear probing tuning in few-shot OOD detection. Considering that some valuable information from the pre-trained model, which is conducive to OOD detection, may be lost during the fine-tuning process, we reutilize features from the pre-trained models to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, due to its training-free property, this method is unable to improve in-distribution accuracy. To this end, we further propose a method called Domain-Specific and General Knowledge Fusion (DSGF) to improve few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experiment results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work can provide the research community with a novel path to leveraging large-scale visual pre-trained models for addressing FS-OOD detection. The code will be released.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6309-6323"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10735106/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Ensuring the reliability of open-world intelligent systems relies heavily on effective out-of-distribution (OOD) detection. Despite the notable success of existing OOD detection methods, their performance in scenarios with limited training samples remains suboptimal. Therefore, we first construct a comprehensive few-shot OOD (FS-OOD) detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like full fine-tuning and linear probing in few-shot OOD detection. Considering that valuable information in the pre-trained model that is conducive to OOD detection may be lost during fine-tuning, we reuse features from the pre-trained model to mitigate this issue. Specifically, we first propose a training-free approach, termed Uncertainty Score Ensemble (USE), which integrates feature-matching scores to enhance existing OOD detection methods and significantly narrows the gap between traditional fine-tuning and PEFT techniques. However, because it is training-free, USE cannot improve in-distribution (ID) accuracy. To this end, we further propose Domain-Specific and General Knowledge Fusion (DSGF), which improves both few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experimental results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work provides the research community with a novel path to leveraging large-scale visual pre-trained models for FS-OOD detection. The code will be released.
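To make the training-free USE idea concrete, here is a minimal PyTorch sketch of a score ensemble of this kind. The specific scoring functions and combination rule (a maximum-softmax-probability score from the fine-tuned model plus a cosine feature-matching score against class-mean prototypes from the frozen pre-trained backbone, summed with equal weight) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def use_score(x, finetuned_model, pretrained_backbone, id_prototypes):
    """Training-free uncertainty score ensemble (illustrative sketch).

    Combines a confidence score from the fine-tuned classifier with a
    feature-matching score computed on frozen pre-trained features.
    Higher scores indicate "more in-distribution".
    """
    # Score 1: maximum softmax probability from the fine-tuned model.
    logits = finetuned_model(x)
    msp = F.softmax(logits, dim=-1).max(dim=-1).values          # (B,)

    # Score 2: cosine similarity between frozen pre-trained features
    # and the nearest ID class prototype (per-class mean feature,
    # estimated from the few-shot ID training samples).
    feats = F.normalize(pretrained_backbone(x), dim=-1)          # (B, D)
    protos = F.normalize(id_prototypes, dim=-1)                  # (C, D)
    match = (feats @ protos.T).max(dim=-1).values                # (B,)

    # Ensemble: a simple unweighted sum; the relative weighting of the
    # two scores is a free design choice in this sketch.
    return msp + match

# Usage: inputs whose score falls below a threshold are flagged as OOD,
# e.g. a threshold chosen at the 5th percentile of ID validation scores:
# threshold = torch.quantile(use_score(id_val_batch, m, b, p), 0.05)
```

Because no parameters are updated, a score like this can only rerank uncertainty estimates; it cannot change the classifier's predictions, which is exactly why the abstract notes that USE leaves ID accuracy unchanged.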
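DSGF, by contrast, is trained. One plausible reading of "fusing domain-specific and general knowledge" is a small trainable head that concatenates frozen pre-trained (general) features with fine-tuned (domain-specific) features before classification; the sketch below is a hypothetical instantiation of that idea, and the `KnowledgeFusionHead` name and architecture are assumptions rather than the paper's actual module.

```python
import torch
import torch.nn as nn

class KnowledgeFusionHead(nn.Module):
    """Hypothetical fusion head: concatenates frozen pre-trained
    (general) and fine-tuned (domain-specific) features, then
    classifies, so that both ID accuracy and OOD scoring can draw
    on both feature sources."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, num_classes),
        )

    def forward(self, general_feats: torch.Tensor,
                specific_feats: torch.Tensor) -> torch.Tensor:
        # Fuse the two feature views by concatenation along the
        # channel dimension, then map to class logits.
        return self.fuse(torch.cat([general_feats, specific_feats], dim=-1))

# In this sketch only the fusion head (plus any PEFT parameters) is
# trained on the few-shot ID data; the pre-trained backbone stays frozen.
```

Training only a lightweight head on top of frozen features keeps the parameter count compatible with few-shot data while still allowing ID accuracy to improve, which is the gap USE alone cannot close.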