Lin Ding, Wenfeng Shen, Weijia Lu, Peng Liu, Shengbo Chen, Sisi Chen
Title: Gradient-Based Meta-Learning Using Adaptive Multiple Loss Weighting and Homoscedastic Uncertainty
DOI: 10.1109/ICCECE58074.2023.10135472
Published in: 2023 3rd International Conference on Consumer Electronics and Computer Engineering (ICCECE), 2023-01-06
Citations: 0
Abstract
Model-agnostic meta-learning schemes use gradient descent to learn task commonalities and obtain initialization parameters for the meta-model, allowing it to adapt rapidly to new tasks with only a few training samples. Such schemes have therefore become the mainstream meta-learning approach to few-shot learning problems. This study addresses the challenge of task uncertainty in few-shot learning and proposes an improved meta-learning approach, which first enables a task-specific learner to select initial parameters that minimize the loss on a new task, then generates weights by comparing meta-loss differences, and finally introduces the homoscedastic uncertainty of each task to weight the diverse losses. Our model outperforms previous meta-learning approaches on few-shot learning tasks and is more robust to the choice of initial learning rates and query sets.
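The final step described above, weighting multiple task losses by their homoscedastic uncertainty, can be sketched as follows. This is a minimal illustration in the style of the standard learned-log-variance formulation (total loss = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ, where sᵢ = log σᵢ²); the function and variable names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses using homoscedastic-uncertainty weighting.

    task_losses: per-task loss values L_i
    log_vars:    learned log-variances s_i = log(sigma_i^2), one per task

    Each loss is scaled by exp(-s_i) (high-uncertainty tasks contribute
    less), and s_i itself is added as a regularizer that discourages the
    model from inflating uncertainty to shrink every loss to zero.
    """
    task_losses = np.asarray(task_losses, dtype=float)
    log_vars = np.asarray(log_vars, dtype=float)
    return float(np.sum(np.exp(-log_vars) * task_losses + log_vars))

# With zero log-variance (sigma^2 = 1), the weighting reduces to a plain sum:
total = uncertainty_weighted_loss([1.0, 2.0], [0.0, 0.0])  # -> 3.0
```

In a full meta-learning setup the `log_vars` would be trainable parameters optimized jointly with the meta-model, so the uncertainty weights adapt as training progresses.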