Incentive-Driven Long-term Optimization for Edge Learning by Hierarchical Reinforcement Mechanism

Yi Liu, Leijie Wu, Yufeng Zhan, Song Guo, Zicong Hong
{"title":"基于层次强化机制的边缘学习激励驱动的长期优化","authors":"Yi Liu, Leijie Wu, Yufeng Zhan, Song Guo, Zicong Hong","doi":"10.1109/ICDCS51616.2021.00013","DOIUrl":null,"url":null,"abstract":"Edge Learning is an emerging distributed machine learning in mobile edge network. Limited works have designed mechanisms to incentivize edge nodes to participate in edge learning. However, their mechanisms only consider myopia optimization on resource consumption, which results in the lack of learning algorithm performance guarantee and longterm sustainability. In this paper, we propose Chiron, an incentive-driven long-term mechanism for edge learning based on hierarchical deep reinforcement learning. First, our optimization goal combines learning-algorithms metric (i.e., model accuracy) with system metric (i.e., learning time, and resource consumption), which can improve edge learning quality under a fixed training budget. Second, we present a two-layer H-DRL design with exterior and inner agents to achieve both long-term and short-term optimization for edge learning, respectively. Finally, experiments on three different real-world datasets are conducted to demonstrate the superiority of our proposed approach. In particular, compared with the state-of-the-art methods under the same budget constraint, the final global model accuracy and time efficiency can be increased by 6.5 % and 39 %, respectively. Our implementation is available at https://github.com/Joey61Liuyi/Chiron.","PeriodicalId":222376,"journal":{"name":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Incentive-Driven Long-term Optimization for Edge Learning by Hierarchical Reinforcement Mechanism\",\"authors\":\"Yi Liu, Leijie Wu, Yufeng Zhan, Song Guo, Zicong Hong\",\"doi\":\"10.1109/ICDCS51616.2021.00013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Edge Learning is an emerging distributed machine learning in mobile edge network. Limited works have designed mechanisms to incentivize edge nodes to participate in edge learning. However, their mechanisms only consider myopia optimization on resource consumption, which results in the lack of learning algorithm performance guarantee and longterm sustainability. In this paper, we propose Chiron, an incentive-driven long-term mechanism for edge learning based on hierarchical deep reinforcement learning. First, our optimization goal combines learning-algorithms metric (i.e., model accuracy) with system metric (i.e., learning time, and resource consumption), which can improve edge learning quality under a fixed training budget. Second, we present a two-layer H-DRL design with exterior and inner agents to achieve both long-term and short-term optimization for edge learning, respectively. Finally, experiments on three different real-world datasets are conducted to demonstrate the superiority of our proposed approach. In particular, compared with the state-of-the-art methods under the same budget constraint, the final global model accuracy and time efficiency can be increased by 6.5 % and 39 %, respectively. 
Our implementation is available at https://github.com/Joey61Liuyi/Chiron.\",\"PeriodicalId\":222376,\"journal\":{\"name\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"volume\":\"30 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICDCS51616.2021.00013\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 41st International Conference on Distributed Computing Systems (ICDCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICDCS51616.2021.00013","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Edge learning is an emerging form of distributed machine learning in mobile edge networks. Only a limited number of works have designed mechanisms to incentivize edge nodes to participate in edge learning, and those mechanisms consider only myopic optimization of resource consumption, leaving learning-algorithm performance and long-term sustainability unguaranteed. In this paper, we propose Chiron, an incentive-driven long-term mechanism for edge learning based on hierarchical deep reinforcement learning (H-DRL). First, our optimization goal combines a learning-algorithm metric (model accuracy) with system metrics (learning time and resource consumption), which improves edge-learning quality under a fixed training budget. Second, we present a two-layer H-DRL design with exterior and inner agents that achieve long-term and short-term optimization for edge learning, respectively. Finally, experiments on three real-world datasets demonstrate the superiority of the proposed approach: compared with state-of-the-art methods under the same budget constraint, final global-model accuracy and time efficiency improve by 6.5% and 39%, respectively. Our implementation is available at https://github.com/Joey61Liuyi/Chiron.
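The abstract describes a two-layer hierarchy in which an exterior agent makes long-horizon decisions (how to pace the fixed training budget across rounds) and an inner agent makes short-horizon decisions (how to split each round's budget into incentives for edge nodes). The following is a minimal, hypothetical Python sketch of that structure only; the function names, interfaces, and placeholder policies are my assumptions and are not taken from the Chiron implementation linked above, which uses actual DRL agents.

```python
import random

TOTAL_BUDGET = 100.0   # fixed overall training budget (arbitrary units)
NUM_ROUNDS = 10        # number of edge-learning rounds
NUM_NODES = 5          # participating edge nodes


def exterior_policy(remaining_budget, rounds_left):
    """Long-term layer (placeholder): decide how much budget to spend this round.

    The paper's exterior agent would learn this decision from long-term signals
    such as global-model accuracy; here we simply spread the budget evenly.
    """
    return remaining_budget / rounds_left


def inner_policy(round_budget, num_nodes):
    """Short-term layer (placeholder): split the round budget into per-node incentives."""
    weights = [random.random() for _ in range(num_nodes)]
    total = sum(weights)
    return [round_budget * w / total for w in weights]


def run_training():
    remaining = TOTAL_BUDGET
    for rnd in range(NUM_ROUNDS):
        spend = exterior_policy(remaining, NUM_ROUNDS - rnd)
        payments = inner_policy(spend, NUM_NODES)
        remaining -= spend
        # In the real system, each node would train locally in proportion to
        # its incentive, and both agents would be rewarded based on accuracy
        # gain, learning time, and resource consumption.
        print(f"round {rnd}: spend={spend:.2f}, "
              + ", ".join(f"node{i}={p:.2f}" for i, p in enumerate(payments)))


if __name__ == "__main__":
    run_training()
```

The point of the sketch is the separation of time scales: the exterior policy sees the whole budget horizon, while the inner policy only sees one round, mirroring the long-term/short-term split the abstract attributes to the two agents.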