A convex Kullback-Leibler divergence and critical-descriptor prototypes for semi-supervised few-shot learning

Authors: Yukun Liu, Daming Shi
Journal: Applied Intelligence, vol. 55, issue 5
DOI: 10.1007/s10489-025-06239-1
URL: https://link.springer.com/article/10.1007/s10489-025-06239-1
Published: 2025-01-21 (Journal Article; journal Impact Factor 3.4; JCR Q2, Computer Science, Artificial Intelligence)
Few-shot learning has achieved great success in recent years thanks to its requirement of only a small number of labeled examples. However, most state-of-the-art few-shot learning techniques rely on transfer learning, which still requires massive amounts of labeled data for training. To simulate the human learning mechanism, deep few-shot models are designed to learn from one, or a few, examples. In this paper, we first analyze representative semi-supervised few-shot learning methods and observe that they suffer from two problems: getting stuck in local optima, and prototype bias. To address these challenges, we propose a new semi-supervised few-shot learning method with a convex Kullback-Leibler divergence and critical-descriptor prototypes, hereafter referred to as CKL. Specifically, CKL optimizes the joint probability density via KL divergence and derives a strictly convex objective, which enables global optimization in semi-supervised clustering. In addition, by incorporating dictionary learning, the critical descriptor extracts more prototypical features, capturing more discriminative feature information and avoiding the prototype bias caused by limited labeled samples. Extensive experiments on three popular benchmark datasets show that the method significantly improves the classification ability of few-shot learning and achieves state-of-the-art performance. In the future, we will explore additional methods that can be integrated with deep learning to further uncover essential features within samples.
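The abstract does not give implementation details of CKL itself, but the semi-supervised clustering it improves upon can be illustrated with a minimal sketch: initialize class prototypes from the labeled support set, then iteratively soft-assign unlabeled embeddings and recompute prototypes. This is a generic soft k-means style refinement, not the authors' convex KL objective; the function name `refine_prototypes` and all parameters below are illustrative assumptions. Note that this baseline update is exactly the kind of procedure that can converge to a local optimum, which is the failure mode the paper's strictly convex formulation is designed to avoid.

```python
import numpy as np

def refine_prototypes(support, support_labels, unlabeled,
                      n_classes, n_iters=5, temp=1.0):
    """Soft k-means style prototype refinement (illustrative baseline):
    prototypes start as per-class means of the labeled support embeddings,
    then unlabeled embeddings are soft-assigned via a softmax over negative
    squared distances and folded into a weighted prototype update."""
    # Initial prototypes: mean of labeled embeddings per class
    protos = np.stack([support[support_labels == c].mean(axis=0)
                       for c in range(n_classes)])
    for _ in range(n_iters):
        # Squared Euclidean distance from each unlabeled point to each prototype
        d2 = ((unlabeled[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
        logits = -d2 / temp
        # Soft responsibilities (numerically stable row-wise softmax)
        w = np.exp(logits - logits.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # Weighted update combining labeled points and soft-labeled unlabeled points
        new_protos = []
        for c in range(n_classes):
            num = (support[support_labels == c].sum(axis=0)
                   + (w[:, c:c + 1] * unlabeled).sum(axis=0))
            den = (support_labels == c).sum() + w[:, c].sum()
            new_protos.append(num / den)
        protos = np.stack(new_protos)
    return protos
```

With two well-separated clusters, the refined prototypes move from the (possibly biased) few-shot support means toward the true cluster centers, which is the "prototype bias" effect the abstract refers to.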
Journal introduction:
Focusing on research in artificial intelligence and neural networks, this journal addresses real-life problems in manufacturing, defense, management, government, and industry that are too complex to be solved through conventional approaches and that require the simulation of intelligent thought processes, heuristics, the application of knowledge, and distributed and parallel processing. The integration of these multiple approaches to solve complex problems is of particular importance.
The journal presents new and original research and technological developments that address real, complex problems. It provides a medium for exchanging the scientific research and technological achievements of the international community.