{"title":"基于SignSGD的多任务联邦边缘学习","authors":"Sawan Singh Mahara, M. Shruti, B. Bharath","doi":"10.1109/NCC55593.2022.9806778","DOIUrl":null,"url":null,"abstract":"The paper proposes a novel Federated Learning (FL) algorithm involving signed gradient as feedback to reduce communication overhead. The Multi-task nature of the algorithm provides each device a custom neural network after completion. Towards improving the performance, a weighted average loss across devices is proposed which considers the similarity between their data distributions. A Probably Approximately Correct (PAC) bound on the true loss in terms of the proposed empirical loss is derived. The bound is in terms of (i) Rademacher complexity, (ii) discrepancy, and (iii) penalty term. A distributed algorithm is proposed to find the discrepancy as well as the fine tuned neural network at each node. It is experimentally shown that this proposed method outperforms existing algorithms such as FedSGD, DITTO, FedAvg and locally trained neural network with good generalization on various data sets.","PeriodicalId":403870,"journal":{"name":"2022 National Conference on Communications (NCC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Multi-Task Federated Edge Learning (MTFeeL) With SignSGD\",\"authors\":\"Sawan Singh Mahara, M. Shruti, B. Bharath\",\"doi\":\"10.1109/NCC55593.2022.9806778\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper proposes a novel Federated Learning (FL) algorithm involving signed gradient as feedback to reduce communication overhead. The Multi-task nature of the algorithm provides each device a custom neural network after completion. Towards improving the performance, a weighted average loss across devices is proposed which considers the similarity between their data distributions. A Probably Approximately Correct (PAC) bound on the true loss in terms of the proposed empirical loss is derived. The bound is in terms of (i) Rademacher complexity, (ii) discrepancy, and (iii) penalty term. A distributed algorithm is proposed to find the discrepancy as well as the fine tuned neural network at each node. 
It is experimentally shown that this proposed method outperforms existing algorithms such as FedSGD, DITTO, FedAvg and locally trained neural network with good generalization on various data sets.\",\"PeriodicalId\":403870,\"journal\":{\"name\":\"2022 National Conference on Communications (NCC)\",\"volume\":\"28 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 National Conference on Communications (NCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/NCC55593.2022.9806778\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 National Conference on Communications (NCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/NCC55593.2022.9806778","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Multi-Task Federated Edge Learning (MTFeeL) With SignSGD
The paper proposes a novel Federated Learning (FL) algorithm that uses signed gradients as feedback to reduce communication overhead. Owing to its multi-task formulation, the algorithm leaves each device with a custom neural network upon completion. To improve performance, a weighted average loss across devices is proposed that accounts for the similarity between their data distributions. A Probably Approximately Correct (PAC) bound on the true loss in terms of the proposed empirical loss is derived. The bound involves (i) the Rademacher complexity, (ii) the discrepancy, and (iii) a penalty term. A distributed algorithm is proposed to estimate the discrepancy as well as the fine-tuned neural network at each node. It is experimentally shown that the proposed method outperforms existing algorithms such as FedSGD, DITTO, FedAvg, and locally trained neural networks, with good generalization across various data sets.
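The abstract does not spell out the update rule, but the two ingredients it names, sign-compressed gradient feedback and a loss weighted by cross-device similarity, can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's MTFeeL algorithm: the synthetic tasks, the fixed similarity weights `alpha`, and all names are illustrative, and the discrepancy estimation step the paper describes is not modeled.

```python
import numpy as np

# Hypothetical sketch of sign-compressed federated updates with a
# similarity-weighted loss. NOT the paper's MTFeeL algorithm: the tasks,
# the fixed weights alpha, and all names below are illustrative only.

rng = np.random.default_rng(0)

num_devices = 4
dim = 10
lr = 0.01
rounds = 500

# Synthetic per-device linear-regression tasks with related weight vectors,
# standing in for devices with similar but non-identical data distributions.
true_w = rng.normal(size=dim)
tasks = []
for k in range(num_devices):
    w_k = true_w + 0.1 * rng.normal(size=dim)  # related but distinct tasks
    X = rng.normal(size=(50, dim))
    y = X @ w_k + 0.01 * rng.normal(size=50)
    tasks.append((X, y))

# Assumed row-stochastic similarity weights between devices; in the paper
# these would be driven by the estimated discrepancy, not fixed constants.
alpha = np.full((num_devices, num_devices), 0.1 / (num_devices - 1))
np.fill_diagonal(alpha, 0.9)

def local_grad(w, X, y):
    """Gradient of the mean squared error at w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Each device keeps its own (personalized) model.
models = [np.zeros(dim) for _ in range(num_devices)]

for t in range(rounds):
    # Each device sends only the sign of its gradient: 1 bit per coordinate.
    signs = np.stack([np.sign(local_grad(models[k], *tasks[k]))
                      for k in range(num_devices)])
    # Device k descends along a similarity-weighted combination of signs.
    for k in range(num_devices):
        models[k] -= lr * alpha[k] @ signs

for k in range(num_devices):
    X, y = tasks[k]
    print(f"device {k}: MSE = {np.mean((X @ models[k] - y) ** 2):.4f}")
```

Transmitting only gradient signs reduces each uplink update to one bit per coordinate, which is the communication saving the abstract refers to, while the per-device models and the weights `alpha[k]` mimic how a multi-task formulation can let each device benefit from similar peers without collapsing to a single shared network.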