{"title":"Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features","authors":"Jiuqing Dong;Yifan Yao;Wei Jin;Heng Zhou;Yongbin Gao;Zhijun Fang","doi":"10.1109/TIP.2024.3468874","DOIUrl":null,"url":null,"abstract":"Ensuring the reliability of open-world intelligent systems heavily relies on effective out-of-distribution (OOD) detection. Despite notable successes in existing OOD detection methods, their performance in scenarios with limited training samples is still suboptimal. Therefore, we first construct a comprehensive few-shot OOD detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like fully fine-tuning and linear probing tuning in few-shot OOD detection. Considering that some valuable information from the pre-trained model, which is conducive to OOD detection, may be lost during the fine-tuning process, we reutilize features from the pre-trained models to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, due to its training-free property, this method is unable to improve in-distribution accuracy. To this end, we further propose a method called Domain-Specific and General Knowledge Fusion (DSGF) to improve few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experiment results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work can provide the research community with a novel path to leveraging large-scale visual pre-trained models for addressing FS-OOD detection. The code will be released.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"33 ","pages":"6309-6323"},"PeriodicalIF":0.0000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10735106/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Ensuring the reliability of open-world intelligent systems relies heavily on effective out-of-distribution (OOD) detection. Despite the notable successes of existing OOD detection methods, their performance in scenarios with limited training samples remains suboptimal. Therefore, we first construct a comprehensive few-shot OOD (FS-OOD) detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional approaches like full fine-tuning and linear probing in few-shot OOD detection. Considering that valuable information in the pre-trained model that is conducive to OOD detection may be lost during fine-tuning, we reutilize features from the pre-trained model to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, because it is training-free, this method cannot improve in-distribution (ID) accuracy. To address this, we further propose Domain-Specific and General Knowledge Fusion (DSGF), which improves both few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experimental results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work provides the research community with a novel path toward leveraging large-scale visual pre-trained models for FS-OOD detection. The code will be released.
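The abstract describes USE only at a high level: a feature-matching score derived from the frozen pre-trained backbone is ensembled with an existing OOD score, with no extra training. The Python sketch below shows one minimal way such an ensemble could look, assuming cosine similarity to the nearest ID class prototype as the matching score and maximum softmax probability (MSP) as the base score; every name here (use_score, feature_matching_score, the weight alpha) is illustrative, not the authors' released code.

# Hypothetical sketch of a training-free uncertainty score ensemble (USE).
# Assumptions, not taken from the paper's code: the feature-matching score is
# the cosine similarity between the frozen pre-trained feature of a test image
# and its nearest ID class prototype, and the base OOD score is the maximum
# softmax probability (MSP) of the fine-tuned classifier.
import numpy as np

def msp_score(logits: np.ndarray) -> float:
    """Maximum softmax probability; higher means more ID-like."""
    z = logits - logits.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(p.max())

def feature_matching_score(feat: np.ndarray, prototypes: np.ndarray) -> float:
    """Cosine similarity to the nearest ID class prototype (pre-trained features)."""
    feat = feat / np.linalg.norm(feat)
    protos = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return float((protos @ feat).max())

def use_score(logits: np.ndarray, pretrained_feat: np.ndarray,
              prototypes: np.ndarray, alpha: float = 0.5) -> float:
    """Weighted ensemble of the base score and the matching score.

    alpha is an illustrative mixing weight; a higher combined score is read
    as 'predicted in-distribution'.
    """
    return alpha * msp_score(logits) + (1.0 - alpha) * feature_matching_score(
        pretrained_feat, prototypes)

Because nothing above is trained, this kind of ensemble can re-rank OOD scores but leaves the fine-tuned classifier, and hence ID accuracy, untouched, which is the limitation DSGF is proposed to address.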
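DSGF is likewise only sketched in the abstract: general knowledge from the pre-trained model is fused with domain-specific knowledge from the fine-tuned model so that both ID accuracy and OOD detection improve. One plausible reading, shown below as an assumption rather than the paper's implementation, is a small trainable head over the concatenation of the two feature streams; DSGFHead and both feature inputs are hypothetical names.

# Hypothetical sketch of Domain-Specific and General Knowledge Fusion (DSGF).
# Assumption (one plausible reading of the abstract, not the released code):
# general features from the frozen pre-trained backbone are concatenated with
# domain-specific features from the fine-tuned backbone, and a small trainable
# head classifies the fused representation, so both the ID predictions and the
# logits used for OOD scoring draw on the two feature sources.
import torch
import torch.nn as nn

class DSGFHead(nn.Module):
    def __init__(self, dim_general: int, dim_specific: int, num_classes: int):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(dim_general + dim_specific, dim_specific),
            nn.ReLU(),
            nn.Linear(dim_specific, num_classes),
        )

    def forward(self, feat_general: torch.Tensor,
                feat_specific: torch.Tensor) -> torch.Tensor:
        # Fuse general (pre-trained) and domain-specific (fine-tuned) features;
        # the resulting logits serve ID classification and OOD scoring alike.
        fused = torch.cat([feat_general, feat_specific], dim=-1)
        return self.classifier(fused)

Unlike USE, a fusion head of this kind requires (few-shot) training on ID data, which is consistent with the abstract's claim that DSGF improves ID accuracy as well as OOD detection across fine-tuning paradigms.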