Few-shot learning (FSL) aims to achieve efficient classification with limited labeled samples, providing an important research paradigm for addressing model generalization in data-scarce scenarios. In the metric-based FSL framework, class prototypes serve as the core transferable representation of classes, and their discriminative power directly impacts classification performance. However, existing methods face two major bottlenecks: first, traditional feature selection mechanisms rely on static modeling, which is susceptible to background noise and struggles to capture dynamic inter-class relationships; second, owing to limitations in the quantity and quality of labeled samples, prototypes built from global features lack fine-grained expression of local discriminative features, limiting their representational power. To overcome these limitations, we propose a novel framework, Learning Discriminative Prototypes (LDP). LDP comprises two modules: (1) adaptive relation-aware refinement, which dynamically models the relationships between class prototypes, highlighting the key features of each class and enhancing the robustness of feature representations; and (2) patch-level contextual feature reweighting, which reweights sample features through patch-level feature interactions, thereby yielding more discriminative prototypes. Experimental results demonstrate that LDP is strongly competitive on five benchmarks covering both standard and cross-domain settings, validating its effectiveness in FSL tasks. For example, in the 1-shot setting on miniImageNet and tieredImageNet, LDP improves accuracy by over 12% compared with the baseline methods; on the cross-domain dataset CUB200, the improvement reaches 6.45% in the 1-shot case. Our code is available on GitHub at https://github.com/fewshot-learner/LDP.
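To make the metric-based prototype pipeline concrete, the following is a minimal NumPy sketch of the general idea, not LDP's actual implementation: class prototypes are formed from patch-level support features, and a hypothetical similarity-based reweighting (a stand-in for the paper's patch-level contextual reweighting) emphasizes patches that agree with the class prototype. All names, shapes, and the softmax weighting scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5-way 1-shot episode: each support image yields P patch embeddings of dim D.
# N_WAY, N_SHOT, P, D and the data are illustrative, not from the paper.
N_WAY, N_SHOT, P, D = 5, 1, 9, 16
support = rng.normal(size=(N_WAY, N_SHOT, P, D))  # patch-level support features

# Global-feature baseline: prototype = mean over shots and patches.
naive_proto = support.mean(axis=(1, 2))  # shape (N_WAY, D)

def reweighted_prototypes(support, naive_proto):
    """Hypothetical patch reweighting: softmax over patch-prototype similarity,
    so patches consistent with the class dominate the refined prototype."""
    protos = []
    for c in range(support.shape[0]):
        patches = support[c].reshape(-1, support.shape[-1])  # (N_SHOT*P, D)
        sims = patches @ naive_proto[c]                      # similarity per patch
        w = np.exp(sims - sims.max())
        w /= w.sum()                                         # softmax weights
        protos.append((w[:, None] * patches).sum(axis=0))    # weighted mean
    return np.stack(protos)                                  # (N_WAY, D)

protos = reweighted_prototypes(support, naive_proto)

# Query classification: nearest prototype by squared Euclidean distance.
query = rng.normal(size=(D,))
pred = int(np.argmin(((protos - query) ** 2).sum(axis=1)))
```

In a real model the patch embeddings would come from a trained backbone and the reweighting would be learned; the sketch only illustrates how patch-level weights can sharpen a prototype relative to a plain global mean.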