Panlong Wu;Kangshuo Li;Ting Wang;Yanjie Dong;Victor C. M. Leung;Fangxin Wang
{"title":"FedFMSL:利用稀疏激活的 LoRA 联合学习基础模型","authors":"Panlong Wu;Kangshuo Li;Ting Wang;Yanjie Dong;Victor C. M. Leung;Fangxin Wang","doi":"10.1109/TMC.2024.3454634","DOIUrl":null,"url":null,"abstract":"Foundation models (FMs) have shown great success in natural language processing, computer vision, and multimodal tasks. FMs have a large number of model parameters, thus requiring a substantial amount of data to help optimize the model during the training. Federated learning has revolutionized machine learning by enabling collaborative learning from decentralized data while still preserving clients’ data privacy. Despite the great benefits foundation models can have empowered by federated learning, their bulky model parameters cause severe communication challenges for modern networks and computation challenges especially for edge devices. Moreover, the data distribution of different clients can be different thus inducing statistical challenges. In this paper, we propose a novel two-stage federated learning algorithm called FedFMSL. A global expert is trained in the first stage and a local expert is trained in the second stage to provide better personalization. We construct a Mixture of Foundation Models (\n<monospace>MoFM</monospace>\n) with these two experts and design a gate neural network with an inserted gate adapter that joins the aggregation every communication round in the second stage. To further adapt to edge computing scenarios with limited computational resources, we design a novel Sparsely Activated LoRA (\n<monospace>SAL</monospace>\n) algorithm that freezes the pre-trained foundation model parameters inserts low-rank adaptation matrices into transformer blocks, and activates them progressively during the training. We employ extensive experiments to verify the effectiveness of FedFMSL, results show that FedFMSL outperforms other SOTA baselines by up to 59.19% in default settings while tuning less than 0.3% parameters of the foundation model.","PeriodicalId":50389,"journal":{"name":"IEEE Transactions on Mobile Computing","volume":"23 12","pages":"15167-15181"},"PeriodicalIF":7.7000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"FedFMSL: Federated Learning of Foundation Models With Sparsely Activated LoRA\",\"authors\":\"Panlong Wu;Kangshuo Li;Ting Wang;Yanjie Dong;Victor C. M. Leung;Fangxin Wang\",\"doi\":\"10.1109/TMC.2024.3454634\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Foundation models (FMs) have shown great success in natural language processing, computer vision, and multimodal tasks. FMs have a large number of model parameters, thus requiring a substantial amount of data to help optimize the model during the training. Federated learning has revolutionized machine learning by enabling collaborative learning from decentralized data while still preserving clients’ data privacy. Despite the great benefits foundation models can have empowered by federated learning, their bulky model parameters cause severe communication challenges for modern networks and computation challenges especially for edge devices. Moreover, the data distribution of different clients can be different thus inducing statistical challenges. In this paper, we propose a novel two-stage federated learning algorithm called FedFMSL. A global expert is trained in the first stage and a local expert is trained in the second stage to provide better personalization. 
We construct a Mixture of Foundation Models (\\n<monospace>MoFM</monospace>\\n) with these two experts and design a gate neural network with an inserted gate adapter that joins the aggregation every communication round in the second stage. To further adapt to edge computing scenarios with limited computational resources, we design a novel Sparsely Activated LoRA (\\n<monospace>SAL</monospace>\\n) algorithm that freezes the pre-trained foundation model parameters inserts low-rank adaptation matrices into transformer blocks, and activates them progressively during the training. We employ extensive experiments to verify the effectiveness of FedFMSL, results show that FedFMSL outperforms other SOTA baselines by up to 59.19% in default settings while tuning less than 0.3% parameters of the foundation model.\",\"PeriodicalId\":50389,\"journal\":{\"name\":\"IEEE Transactions on Mobile Computing\",\"volume\":\"23 12\",\"pages\":\"15167-15181\"},\"PeriodicalIF\":7.7000,\"publicationDate\":\"2024-09-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Mobile Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10666083/\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Mobile Computing","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10666083/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
FedFMSL: Federated Learning of Foundation Models With Sparsely Activated LoRA
Foundation models (FMs) have shown great success in natural language processing, computer vision, and multimodal tasks. FMs have a large number of model parameters and thus require a substantial amount of data to optimize the model during training. Federated learning has revolutionized machine learning by enabling collaborative learning from decentralized data while preserving clients' data privacy. Despite the great benefits federated learning can bring to foundation models, their bulky model parameters pose severe communication challenges for modern networks and computation challenges, especially for edge devices. Moreover, the data distributions of different clients can differ, inducing statistical challenges. In this paper, we propose a novel two-stage federated learning algorithm called FedFMSL. A global expert is trained in the first stage and a local expert in the second stage to provide better personalization. We construct a Mixture of Foundation Models (MoFM) with these two experts and design a gate neural network with an inserted gate adapter that joins the aggregation in every communication round of the second stage. To further adapt to edge computing scenarios with limited computational resources, we design a novel Sparsely Activated LoRA (SAL) algorithm that freezes the pre-trained foundation model parameters, inserts low-rank adaptation matrices into transformer blocks, and activates them progressively during training. Extensive experiments verify the effectiveness of FedFMSL: it outperforms other SOTA baselines by up to 59.19% in default settings while tuning less than 0.3% of the foundation model's parameters.
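The abstract describes two mechanisms: LoRA matrices that are inserted into frozen transformer blocks and switched on progressively (SAL), and a gate network that mixes the global and local experts (MoFM). The following PyTorch sketch illustrates these ideas under assumptions of our own; the class names, the rank/scaling defaults, the round-based activation schedule, and the two-way softmax gate are hypothetical and are not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer with a low-rank adaptation branch (illustrative)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained foundation model weights
        # Low-rank adaptation matrices inserted alongside the frozen weight.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank
        self.active = False  # sparse activation: adapter starts disabled

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        if self.active:
            out = out + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling
        return out


def progressively_activate(lora_layers, current_round: int, rounds_per_layer: int = 5):
    """Hypothetical SAL-style schedule: enable one more adapter every few rounds."""
    num_active = min(len(lora_layers), current_round // rounds_per_layer + 1)
    for i, layer in enumerate(lora_layers):
        layer.active = i < num_active


class MoFMGate(nn.Module):
    """Illustrative gate network mixing a global expert and a local expert per sample."""

    def __init__(self, feature_dim: int):
        super().__init__()
        self.gate = nn.Linear(feature_dim, 2)

    def forward(self, features, global_logits, local_logits):
        w = torch.softmax(self.gate(features), dim=-1)
        return w[..., :1] * global_logits + w[..., 1:] * local_logits
```

In this reading, only the small LoRA matrices and the gate parameters would be trained and exchanged in federated rounds, which is consistent with the abstract's claim of tuning under 0.3% of the foundation model's parameters.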
Journal Introduction:
IEEE Transactions on Mobile Computing addresses key technical issues related to various aspects of mobile computing. This includes (a) architectures, (b) support services, (c) algorithm/protocol design and analysis, (d) mobile environments, (e) mobile communication systems, (f) applications, and (g) emerging technologies. Topics of interest span a wide range, covering aspects like mobile networks and hosts, mobility management, multimedia, operating system support, power management, online and mobile environments, security, scalability, reliability, and emerging technologies such as wearable computers, body area networks, and wireless sensor networks. The journal serves as a comprehensive platform for advancements in mobile computing research.