An Incentive Mechanism for Big Data Trading in End-Edge-Cloud Hierarchical Federated Learning

Yunfeng Zhao, Zhicheng Liu, Chao Qiu, Xiaofei Wang, F. Yu, Victor C. M. Leung
2021 IEEE Global Communications Conference (GLOBECOM), December 2021. DOI: 10.1109/GLOBECOM46510.2021.9685514
Citations: 5

Abstract

As a compelling collaborative machine learning framework in the big data era, federated learning allows multiple participants to jointly train a model without revealing their private data. To further leverage the ubiquitous resources in end-edge-cloud systems, hierarchical federated learning (HFL) exploits the layered architecture to relieve excessive communication overhead and the risk of data leakage. Because end devices are often self-interested and reluctant to join model training, encouraging them to participate is an emerging and challenging issue that strongly affects training performance and has not yet been well addressed. This paper proposes an incentive mechanism for HFL in end-edge-cloud systems that motivates end devices to contribute data for model training. The hierarchical training process is modeled as a multi-layer Stackelberg game in which the sub-games are interconnected through their utility functions. We derive the Nash equilibrium strategies and closed-form solutions to guide the players. By fully capturing the interest relationships among the players, the proposed mechanism trades low costs for high model performance. Simulations demonstrate the effectiveness of the proposed mechanism and reveal the stakeholders' dependence on the allocation of data resources.
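To illustrate the kind of game-theoretic incentive structure the abstract describes, here is a minimal single-layer sketch, not the paper's multi-layer end-edge-cloud formulation: a leader (e.g. an edge server) announces a total reward `R`, and each end device chooses a data contribution to maximize its own utility. The proportional reward-sharing utility `u_i = R * x_i / Σ_j x_j - c_i * x_i` and the resulting closed-form Nash equilibrium are standard textbook assumptions introduced only for illustration.

```python
# Illustrative single-layer Stackelberg sketch (an assumption for exposition,
# NOT the paper's multi-layer formulation). A leader announces total reward R;
# N followers (end devices) choose data contributions x_i with unit cost c_i.
# Under proportional sharing, u_i(x) = R * x_i / sum_j x_j - c_i * x_i, and the
# followers' sub-game has the well-known closed-form Nash equilibrium
#   X* = (N - 1) * R / sum_j c_j,   x_i* = X* * (1 - (N - 1) * c_i / sum_j c_j),
# valid when every x_i* is positive (interior equilibrium).

def nash_equilibrium(R, costs):
    """Closed-form equilibrium contributions for the followers' sub-game."""
    n = len(costs)
    total_c = sum(costs)
    X = (n - 1) * R / total_c  # equilibrium total contribution
    x = [X * (1 - (n - 1) * c / total_c) for c in costs]
    assert all(xi > 0 for xi in x), "closed form requires an interior equilibrium"
    return x

def utility(R, costs, x, i):
    """Utility of device i given everyone's contributions x."""
    X = sum(x)
    return R * x[i] / X - costs[i] * x[i]

if __name__ == "__main__":
    R, costs = 10.0, [1.0, 2.0, 2.0]
    x_star = nash_equilibrium(R, costs)
    # Sanity check: no device gains by unilaterally deviating from x_i*.
    for i in range(len(costs)):
        u_eq = utility(R, costs, x_star, i)
        for scale in (0.9, 1.1):
            x_dev = list(x_star)
            x_dev[i] *= scale
            assert utility(R, costs, x_dev, i) <= u_eq + 1e-9
    print(x_star)
```

In the paper's multi-layer setting, such sub-games would be stacked (cloud–edge and edge–device) and coupled through the utility functions, with equilibria derived layer by layer; the single-layer version above only shows the basic best-response mechanics.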