Target Oriented Dynamic Adaption for Cross-Domain Few-Shot Learning

Xinyi Chang, Chunyu Du, Xinjing Song, Weifeng Liu, Yanjiang Wang

Neural Processing Letters, published 2024-04-27. DOI: 10.1007/s11063-024-11508-0
Abstract
Few-shot learning has made satisfactory progress over the years, but most methods implicitly assume that the data in the base (source) classes and the novel (target) classes are sampled from the same data distribution (domain), an assumption that often fails in practice. Cross-domain few-shot learning (CD-FSL) aims to recognize novel target classes from a small number of labeled instances on the target domain in the presence of domain shift between the source and target domains. However, because instances in the two domains do not follow the same distribution, the knowledge a network learns on the source domain often fails to adapt when transferred to the target domain. To overcome this problem, we propose a Target Oriented Dynamic Adaption (TODA) model, which uses a tiny amount of target data to guide the network in dynamically adjusting and adapting during training. Specifically, this work proposes a domain-specific adapter to alleviate the network's inadaptability when transferring to the target domain. By combining with the mainstream backbone network, the domain-specific adapter makes the extracted features more specific to tasks in the target domain and reduces the influence of tasks in the source domain. In addition, we propose an adaptive optimization method that assigns different weights according to the importance of the different optimization tasks. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our TODA method.
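The abstract names two concrete mechanisms: a domain-specific adapter combined with the mainstream backbone network, and an adaptive weighting of optimization tasks. The paper's exact architecture is not given here, so the following is only a minimal sketch of how such components are commonly built: a residual 1x1-convolution adapter over backbone feature maps, and learnable uncertainty-style task weights in the spirit of Kendall et al. The class names DomainAdapter and AdaptiveTaskWeighting are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch only: illustrates the general shape of a domain-specific
# adapter and adaptively weighted multi-task losses, not TODA's actual design.
import torch
import torch.nn as nn


class DomainAdapter(nn.Module):
    """Lightweight residual adapter: a 1x1-conv branch added to backbone features."""

    def __init__(self, channels: int):
        super().__init__()
        self.adapter = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual combination with the mainstream backbone features,
        # steering them toward the target domain without overwriting them.
        return x + self.adapter(x)


class AdaptiveTaskWeighting(nn.Module):
    """Learnable per-task weights via homoscedastic-uncertainty-style scaling."""

    def __init__(self, num_tasks: int):
        super().__init__()
        # One log-variance per optimization task; initialized to equal weights.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, losses: list) -> torch.Tensor:
        total = torch.zeros((), device=losses[0].device)
        for i, loss in enumerate(losses):
            # Tasks the optimizer deems more reliable (lower log-variance)
            # receive larger effective weights; the +log_var term regularizes.
            total = total + torch.exp(-self.log_vars[i]) * loss + self.log_vars[i]
        return total


if __name__ == "__main__":
    feats = torch.randn(4, 64, 8, 8)           # stand-in backbone feature maps
    adapted = DomainAdapter(64)(feats)          # target-oriented features
    weighting = AdaptiveTaskWeighting(num_tasks=2)
    total = weighting([torch.tensor(1.3), torch.tensor(0.7)])
    total.backward()                            # gradients flow into the task weights
    print(adapted.shape, total.item())
```

In a CD-FSL training loop of this kind, the adapter parameters and the task weights would be the main quantities updated from the small amount of labeled target data, while the backbone stays largely fixed; again, this reflects common practice rather than the specific TODA procedure.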
About the journal
Neural Processing Letters is an international journal publishing research results and innovative ideas on all aspects of artificial neural networks. Coverage includes theoretical developments, biological models, new formal modes, learning, applications, software and hardware developments, and prospective research.
The journal promotes fast exchange of information in the community of neural network researchers and users. The resurgence of interest in artificial neural networks since the beginning of the 1980s has been coupled with tremendous research activity in specialized and multidisciplinary groups. Research, however, is not possible without good communication and the exchange of information, especially in a field covering such different areas, and fast communication is a key aspect; this is the rationale behind Neural Processing Letters.