A Joint Time and Energy-Efficient Federated Learning-based Computation Offloading Method for Mobile Edge Computing

Anwesha Mukherjee, Rajkumar Buyya
{"title":"A Joint Time and Energy-Efficient Federated Learning-based Computation Offloading Method for Mobile Edge Computing","authors":"Anwesha Mukherjee, Rajkumar Buyya","doi":"arxiv-2409.02548","DOIUrl":null,"url":null,"abstract":"Computation offloading at lower time and lower energy consumption is crucial\nfor resource limited mobile devices. This paper proposes an offloading\ndecision-making model using federated learning. Based on the task type and the\nuser input, the proposed decision-making model predicts whether the task is\ncomputationally intensive or not. If the predicted result is computationally\nintensive, then based on the network parameters the proposed decision-making\nmodel predicts whether to offload or locally execute the task. According to the\npredicted result the task is either locally executed or offloaded to the edge\nserver. The proposed method is implemented in a real-time environment, and the\nexperimental results show that the proposed method has achieved above 90%\nprediction accuracy in offloading decision-making. The experimental results\nalso present that the proposed offloading method reduces the response time and\nenergy consumption of the user device by ~11-31% for computationally intensive\ntasks. A partial computation offloading method for federated learning is also\nproposed and implemented in this paper, where the devices which are unable to\nanalyse the huge number of data samples, offload a part of their local datasets\nto the edge server. For secure data transmission, cryptography is used. The\nexperimental results present that using encryption and decryption the total\ntime is increased by only 0.05-0.16%. 
The results also present that the\nproposed partial computation offloading method for federated learning has\nachieved a prediction accuracy of above 98% for the global model.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"14 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.02548","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Computation offloading with low latency and low energy consumption is crucial for resource-limited mobile devices. This paper proposes an offloading decision-making model based on federated learning. From the task type and the user input, the model first predicts whether a task is computationally intensive. If it is, the model then uses the network parameters to predict whether the task should be offloaded or executed locally, and the task is executed accordingly, either on the device or on the edge server. The proposed method is implemented in a real-time environment, and the experimental results show that it achieves above 90% prediction accuracy in offloading decision-making. The results also show that the proposed offloading method reduces the response time and energy consumption of the user device by roughly 11-31% for computationally intensive tasks. A partial computation offloading method for federated learning is also proposed and implemented, in which devices that are unable to analyse their large number of data samples offload part of their local datasets to the edge server. Cryptography is used for secure data transmission; the experimental results show that encryption and decryption increase the total time by only 0.05-0.16%. The results also show that the proposed partial computation offloading method for federated learning achieves a prediction accuracy above 98% for the global model.
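The two-stage decision flow described in the abstract can be sketched as follows. This is a minimal illustration only: the two classifiers here are hand-written threshold rules standing in for the paper's trained federated-learning models, and the task types, thresholds, and network parameters are all assumptions, not values from the paper.

```python
# Hypothetical sketch of the two-stage offloading decision. Both "predictors"
# are illustrative threshold rules, not the paper's trained models.

def is_compute_intensive(task_type: str, input_size_mb: float) -> bool:
    """Stage 1: from task type and user input, predict whether the task
    is computationally intensive (categories and threshold are assumed)."""
    heavy_types = {"image_processing", "video_encoding", "ml_inference"}
    return task_type in heavy_types or input_size_mb > 50.0

def should_offload(uplink_mbps: float, server_load: float) -> bool:
    """Stage 2: from network parameters, predict whether offloading
    beats local execution (illustrative rule, assumed thresholds)."""
    return uplink_mbps > 5.0 and server_load < 0.8

def decide(task_type: str, input_size_mb: float,
           uplink_mbps: float, server_load: float) -> str:
    # Non-intensive tasks always run locally; intensive tasks are
    # offloaded only when the network conditions favour it.
    if not is_compute_intensive(task_type, input_size_mb):
        return "local"
    return "offload" if should_offload(uplink_mbps, server_load) else "local"

print(decide("ml_inference", 120.0, uplink_mbps=20.0, server_load=0.3))  # offload
print(decide("text_edit", 1.0, uplink_mbps=20.0, server_load=0.3))       # local
print(decide("video_encoding", 10.0, uplink_mbps=1.0, server_load=0.3))  # local
```

In the paper, both prediction steps are learned models trained via federated learning rather than fixed rules; the sketch only mirrors the reported control flow.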
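The partial-offloading step — a device keeping part of its local dataset and encrypting the rest before sending it to the edge server — can be sketched like this. The split fraction is an assumption, and the XOR one-time pad is a deliberately simple placeholder for whatever cipher the paper actually uses.

```python
# Illustrative sketch of partial dataset offloading with encrypted transfer.
# The 60/40 split and the XOR one-time-pad cipher are placeholders, not the
# paper's actual parameters or cryptographic scheme.
import secrets

def split_dataset(samples, local_fraction=0.6):
    """Keep a fraction of the samples on-device; mark the rest for offload."""
    cut = int(len(samples) * local_fraction)
    return samples[:cut], samples[cut:]

def xor_cipher(payload: bytes, key: bytes) -> bytes:
    """XOR stream cipher: applying it twice with the same key decrypts."""
    return bytes(b ^ k for b, k in zip(payload, key))

samples = [f"sample-{i}".encode() for i in range(10)]
local_part, offload_part = split_dataset(samples)

blob = b"|".join(offload_part)          # serialise the offloaded partition
key = secrets.token_bytes(len(blob))    # one-time pad, same length as payload
ciphertext = xor_cipher(blob, key)      # encrypted before transmission

# The edge server (holding the shared key) recovers the plaintext.
assert xor_cipher(ciphertext, key) == blob
print(len(local_part), len(offload_part))  # 6 4
```

The abstract's measurement that encryption and decryption add only 0.05-0.16% to the total time suggests the cryptographic overhead is negligible relative to transmission and training; any production version would of course use an authenticated cipher rather than this toy pad.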