Leveraging Foundation Models for Efficient Federated Learning in Resource-restricted Edge Networks

S. Kawa Atapour, S. Jamal SeyedMohammadi, S. Mohammad Sheikholeslami, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi

arXiv - CS - Distributed, Parallel, and Cluster Computing. Published 2024-09-14. DOI: arxiv-2409.09273. Citations: 0
Abstract
Recently, pre-trained Foundation Models (FMs) have been combined with Federated Learning (FL) to improve the training of downstream tasks while preserving privacy. However, deploying FMs over edge networks with resource-constrained Internet of Things (IoT) devices remains under-explored. This paper proposes a novel framework, Federated Distilling knowledge to Prompt (FedD2P), that leverages the robust representation abilities of a vision-language FM without deploying it locally on edge devices. The framework distills the aggregated knowledge of IoT devices to a prompt generator in order to efficiently adapt the frozen FM to downstream tasks. To eliminate the dependency on a public dataset, our framework leverages per-class local knowledge from IoT devices, together with linguistic descriptions of the classes, to train the prompt generator. Our experiments on diverse image classification datasets (CIFAR, OxfordPets, SVHN, EuroSAT, and DTD) show that FedD2P outperforms the baselines in terms of model performance.
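To make the mechanism concrete, below is a minimal PyTorch sketch of the distill-knowledge-to-prompt idea the abstract describes: clients upload per-class soft labels rather than raw data, the server aggregates them, and a small prompt generator is trained so that a frozen FM's class scores match the aggregated knowledge. All module names, tensor shapes, the linear stand-in for the frozen vision-language FM, and the KL-divergence objective are illustrative assumptions, not the paper's actual architecture or loss.

# Hypothetical sketch of FedD2P-style training, under the assumptions stated
# above. A tiny MLP maps frozen class-description embeddings to soft prompts;
# a frozen linear head stands in for the frozen vision-language FM.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES, EMBED_DIM, PROMPT_LEN = 10, 512, 4

class PromptGenerator(nn.Module):
    """Maps a class-description embedding to a sequence of soft prompt vectors."""
    def __init__(self, embed_dim=EMBED_DIM, prompt_len=PROMPT_LEN):
        super().__init__()
        self.prompt_len = prompt_len
        self.net = nn.Sequential(
            nn.Linear(embed_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, prompt_len * embed_dim),
        )

    def forward(self, desc_emb):                         # (C, D) -> (C, L, D)
        out = self.net(desc_emb)
        return out.view(-1, self.prompt_len, desc_emb.size(-1))

def aggregate_client_knowledge(client_logits, client_counts):
    """Average the per-class soft labels uploaded by clients, weighted by
    how many samples of each class a client holds (no raw data is shared)."""
    weights = client_counts / client_counts.sum(dim=0, keepdim=True)   # (K, C)
    return (client_logits * weights.unsqueeze(-1)).sum(dim=0)          # (C, C)

# Toy stand-ins for quantities the real system would provide.
K = 5                                                      # number of IoT clients
client_logits = torch.randn(K, NUM_CLASSES, NUM_CLASSES)   # per-class average logits per client
client_counts = torch.rand(K, NUM_CLASSES) + 0.1           # per-class sample counts per client
desc_emb = torch.randn(NUM_CLASSES, EMBED_DIM)             # frozen text-encoder embeddings of class descriptions
frozen_fm_head = nn.Linear(PROMPT_LEN * EMBED_DIM, NUM_CLASSES).requires_grad_(False)

teacher = aggregate_client_knowledge(client_logits, client_counts)     # (C, C)
generator = PromptGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-3)

for step in range(100):
    prompts = generator(desc_emb)                          # (C, L, D)
    student = frozen_fm_head(prompts.flatten(1))           # (C, C) scores via the frozen FM
    # Distillation: only the prompt generator is updated; the FM stays frozen.
    loss = F.kl_div(F.log_softmax(student, dim=-1),
                    F.softmax(teacher, dim=-1), reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch the gradient flows through the frozen head into the generator, mirroring the abstract's claim that only a lightweight prompt generator is trained while the FM itself is never deployed on, or updated by, the edge devices.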