{"title":"AFMeta:具有时间加权聚合的异步联邦元学习","authors":"Sheng Liu, Haohao Qu, Qiyang Chen, Weitao Jian, Rui Liu, Linlin You","doi":"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00100","DOIUrl":null,"url":null,"abstract":"The ever-increasing concerns on data security and user privacy have significantly impacted the current centralized mechanism of intelligent systems in bridging private data islands and idle computing resources commonly dispersed at the edge. To resolve that, a novel distributed learning paradigm, called Federated Learning (FL), which can learn a global model in a collaborative and privacy-preserving manner, has been proposed and widely discussed. Furthermore, to tackle the data heterogeneity and model adaptation issues faced by FL, meta-learning starts to be applied together with FL to rapidly train a global model with high generalization. However, since federated meta-learning is still in its infancy to collaborate with participants in synchronous mode, straggler and over-fitting issues may impede its application in ubiquitous intelligence, such as smart health and intelligent transportation. Motivated by this, this paper proposes a novel asynchronous federated meta-learning mechanism, called AFMeta, that can measure the staleness of local models to enhance model aggregation. To the best of our knowledge, AFMeta is the first work studying the asynchronous mode in federated meta-learning. We evaluate AFMeta against state-of-the-art baselines on classification and regression tasks. 
The results show that it boosts the model performance by 44.23% and reduces the learning time by 86.35%.","PeriodicalId":43791,"journal":{"name":"Scalable Computing-Practice and Experience","volume":"18 1","pages":"641-648"},"PeriodicalIF":0.9000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"AFMeta: Asynchronous Federated Meta-learning with Temporally Weighted Aggregation\",\"authors\":\"Sheng Liu, Haohao Qu, Qiyang Chen, Weitao Jian, Rui Liu, Linlin You\",\"doi\":\"10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00100\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The ever-increasing concerns on data security and user privacy have significantly impacted the current centralized mechanism of intelligent systems in bridging private data islands and idle computing resources commonly dispersed at the edge. To resolve that, a novel distributed learning paradigm, called Federated Learning (FL), which can learn a global model in a collaborative and privacy-preserving manner, has been proposed and widely discussed. Furthermore, to tackle the data heterogeneity and model adaptation issues faced by FL, meta-learning starts to be applied together with FL to rapidly train a global model with high generalization. However, since federated meta-learning is still in its infancy to collaborate with participants in synchronous mode, straggler and over-fitting issues may impede its application in ubiquitous intelligence, such as smart health and intelligent transportation. Motivated by this, this paper proposes a novel asynchronous federated meta-learning mechanism, called AFMeta, that can measure the staleness of local models to enhance model aggregation. To the best of our knowledge, AFMeta is the first work studying the asynchronous mode in federated meta-learning. 
We evaluate AFMeta against state-of-the-art baselines on classification and regression tasks. The results show that it boosts the model performance by 44.23% and reduces the learning time by 86.35%.\",\"PeriodicalId\":43791,\"journal\":{\"name\":\"Scalable Computing-Practice and Experience\",\"volume\":\"18 1\",\"pages\":\"641-648\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2022-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Scalable Computing-Practice and Experience\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00100\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Scalable Computing-Practice and Experience","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SmartWorld-UIC-ATC-ScalCom-DigitalTwin-PriComp-Metaverse56740.2022.00100","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
AFMeta: Asynchronous Federated Meta-learning with Temporally Weighted Aggregation
Ever-increasing concerns about data security and user privacy have significantly impacted the centralized mechanisms of intelligent systems that bridge private data islands and idle computing resources dispersed at the edge. To address this, a novel distributed learning paradigm, called Federated Learning (FL), which can learn a global model in a collaborative and privacy-preserving manner, has been proposed and widely discussed. Furthermore, to tackle the data heterogeneity and model adaptation issues faced by FL, meta-learning has begun to be applied together with FL to rapidly train a global model with high generalization. However, since federated meta-learning still collaborates with participants only in synchronous mode, straggler and over-fitting issues may impede its application in ubiquitous intelligence, such as smart health and intelligent transportation. Motivated by this, this paper proposes a novel asynchronous federated meta-learning mechanism, called AFMeta, that measures the staleness of local models to enhance model aggregation. To the best of our knowledge, AFMeta is the first work to study the asynchronous mode in federated meta-learning. We evaluate AFMeta against state-of-the-art baselines on classification and regression tasks. The results show that it boosts model performance by 44.23% and reduces learning time by 86.35%.
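The abstract's key idea — discounting stale local models during asynchronous aggregation — can be sketched as follows. The paper's exact weighting formula is not given in the abstract, so this is a minimal illustration assuming an exponential staleness decay (as used in FedAsync-style asynchronous FL); the function names and the mixing rate `alpha` are hypothetical, not AFMeta's actual parameters.

```python
import numpy as np

def staleness_weight(current_round, model_round, alpha=0.6):
    # Hypothetical temporal weight: a model trained on an older global
    # round is exponentially discounted by its staleness.
    staleness = current_round - model_round
    return alpha * (0.5 ** staleness)

def aggregate(global_model, local_model, current_round, model_round, alpha=0.6):
    # Asynchronously mix one arriving local model into the global model,
    # weighted by how stale the local model is.
    w = staleness_weight(current_round, model_round, alpha)
    return {k: (1 - w) * global_model[k] + w * local_model[k]
            for k in global_model}

global_model = {"layer": np.zeros(3)}
local_model = {"layer": np.ones(3)}

# A fresh local model (staleness 0) is mixed in with full weight alpha;
# one that is two rounds stale contributes only alpha / 4.
fresh = aggregate(global_model, local_model, current_round=5, model_round=5)
stale = aggregate(global_model, local_model, current_round=5, model_round=3)
```

Because each arriving model is merged immediately rather than waiting for all participants, stragglers no longer block a synchronous round — which is the motivation the abstract gives for the asynchronous mode.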
Journal introduction:
The area of scalable computing has matured and reached a point where new issues and trends require a professional forum. SCPE provides this avenue by publishing original refereed papers that address the present as well as the future of parallel and distributed computing. The journal focuses on algorithm development, implementation and execution on real-world parallel architectures, and the application of parallel and distributed computing to the solution of real-life problems.