EEML: Ensemble Embedded Meta-learning

WISE · Pub Date: 2022-06-18 · DOI: 10.48550/arXiv.2206.09195
Geng Li, Boyuan Ren, Hongzhi Wang
{"title":"集成嵌入式元学习","authors":"Geng Li, Boyuan Ren, Hongzhi Wang","doi":"10.48550/arXiv.2206.09195","DOIUrl":null,"url":null,"abstract":". To accelerate learning process with few samples, meta-learning resorts to prior knowledge from previous tasks. However, the inconsis-tent task distribution and heterogeneity is hard to be handled through a global sharing model initialization. In this paper, based on gradient-based meta-learning, we propose an ensemble embedded meta-learning algorithm (EEML) that explicitly utilizes multi-model-ensemble to organize prior knowledge into diverse specific experts. We rely on a task embedding cluster mechanism to deliver diverse tasks to matching experts in training process and instruct how experts collaborate in test phase. As a result, the multi experts can focus on their own area of ex-pertise and cooperate in upcoming task to solve the task heterogeneity. The experimental results show that the proposed method outperforms recent state-of-the-arts easily in few-shot learning problem, which validates the importance of differentiation and cooperation.","PeriodicalId":424892,"journal":{"name":"WISE","volume":"177 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"EEML: Ensemble Embedded Meta-learning\",\"authors\":\"Geng Li, Boyuan Ren, Hongzhi Wang\",\"doi\":\"10.48550/arXiv.2206.09195\",\"DOIUrl\":null,\"url\":null,\"abstract\":\". To accelerate learning process with few samples, meta-learning resorts to prior knowledge from previous tasks. However, the inconsis-tent task distribution and heterogeneity is hard to be handled through a global sharing model initialization. In this paper, based on gradient-based meta-learning, we propose an ensemble embedded meta-learning algorithm (EEML) that explicitly utilizes multi-model-ensemble to organize prior knowledge into diverse specific experts. We rely on a task embedding cluster mechanism to deliver diverse tasks to matching experts in training process and instruct how experts collaborate in test phase. As a result, the multi experts can focus on their own area of ex-pertise and cooperate in upcoming task to solve the task heterogeneity. The experimental results show that the proposed method outperforms recent state-of-the-arts easily in few-shot learning problem, which validates the importance of differentiation and cooperation.\",\"PeriodicalId\":424892,\"journal\":{\"name\":\"WISE\",\"volume\":\"177 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-06-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"WISE\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.48550/arXiv.2206.09195\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"WISE","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2206.09195","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

To accelerate the learning process with few samples, meta-learning resorts to prior knowledge from previous tasks. However, inconsistent task distributions and task heterogeneity are hard to handle through a globally shared model initialization. In this paper, building on gradient-based meta-learning, we propose an ensemble embedded meta-learning algorithm (EEML) that explicitly uses a multi-model ensemble to organize prior knowledge into diverse, specialized experts. We rely on a task-embedding clustering mechanism to deliver diverse tasks to matching experts during training and to guide how the experts collaborate in the test phase. As a result, the multiple experts can focus on their own areas of expertise and cooperate on upcoming tasks to resolve task heterogeneity. Experimental results show that the proposed method clearly outperforms recent state-of-the-art methods on few-shot learning problems, which validates the importance of differentiation and cooperation.
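Only the abstract is available here, so the following is a minimal PyTorch sketch of the mechanism it describes, not the authors' implementation: tasks are embedded and clustered, each training task is delivered to the expert with the nearest cluster centroid, and at test time the adapted experts' predictions are combined. The mean-pooled task embedding, the running-mean clustering, the first-order meta-update, and every name and hyperparameter below are illustrative assumptions.

```python
# Hypothetical sketch of the EEML idea from the abstract; NOT the paper's code.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
IN_DIM, N_CLASSES, N_EXPERTS = 32, 5, 4

# One small MLP per expert; the paper's actual backbone is unknown here.
experts = [
    nn.Sequential(nn.Linear(IN_DIM, 64), nn.ReLU(), nn.Linear(64, N_CLASSES))
    for _ in range(N_EXPERTS)
]
# Cluster centroids in task-embedding space, one per expert.
centroids = torch.randn(N_EXPERTS, IN_DIM)


def task_embedding(support_x):
    # Embed a task as the mean of its support inputs (an assumed choice;
    # the paper's embedding function may differ).
    return support_x.mean(dim=0)


def adapt(expert, support_x, support_y, lr=0.05, steps=3):
    # Inner loop: clone the expert and take a few gradient steps on the
    # support set (a first-order stand-in for the MAML inner loop).
    fast = copy.deepcopy(expert)
    opt = torch.optim.SGD(fast.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(fast(support_x), support_y).backward()
        opt.step()
    return fast


def train_step(support_x, support_y, query_x, query_y, meta_lr=0.01):
    # Training: deliver the task to the expert with the nearest centroid,
    # adapt a copy on the support set, then apply the query-set gradient of
    # the adapted copy to the expert's slow weights (first-order MAML style).
    emb = task_embedding(support_x)
    k = int(torch.argmin(torch.cdist(emb[None], centroids)))
    centroids[k] = 0.9 * centroids[k] + 0.1 * emb  # running-mean clustering
    fast = adapt(experts[k], support_x, support_y)
    loss = F.cross_entropy(fast(query_x), query_y)
    grads = torch.autograd.grad(loss, list(fast.parameters()))
    with torch.no_grad():
        for slow, g in zip(experts[k].parameters(), grads):
            slow -= meta_lr * g


def predict(support_x, support_y, query_x):
    # Test: every expert adapts to the task, and their predictions are
    # combined with softmax weights over negative centroid distances.
    emb = task_embedding(support_x)
    w = F.softmax(-torch.cdist(emb[None], centroids)[0], dim=0)
    adapted = [adapt(e, support_x, support_y) for e in experts]
    return sum(w[i] * adapted[i](query_x) for i in range(N_EXPERTS))


# Toy usage: one 5-way task with 10 support and 10 query samples.
sx, sy = torch.randn(10, IN_DIM), torch.randint(0, N_CLASSES, (10,))
qx, qy = torch.randn(10, IN_DIM), torch.randint(0, N_CLASSES, (10,))
train_step(sx, sy, qx, qy)
print(predict(sx, sy, qx).shape)  # torch.Size([10, 5])
```

Here the softmax over negative centroid distances stands in for the test-phase cooperation the abstract mentions; the paper's actual clustering and combination rules may well differ.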
Latest articles in this journal
- Learning to Select the Relevant History Turns in Conversational Question Answering
- FASTAGEDS: Fast Approximate Graph Entity Dependency Discovery
- Debias the Black-box: A Fair Ranking Framework via Knowledge Distillation
- A Multi-Threading Algorithm for Constrained Path Optimization Problem on Road Networks
- EEML: Ensemble Embedded Meta-learning