Federated Learning With Heterogeneous Client Expectations: A Game Theory Approach

IF 8.9 · CAS Tier 2, Computer Science · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · IEEE Transactions on Knowledge and Data Engineering · Vol. 36, No. 12, pp. 8220-8237 · Pub Date: 2024-09-19 · DOI: 10.1109/TKDE.2024.3464488
Sheng Shen;Chi Liu;Teng Joon Lim
Citations: 0

Abstract

Federated Learning With Heterogeneous Client Expectations: A Game Theory Approach
In federated learning (FL), local models are trained independently by clients, local model parameters are shared with a global aggregator or server, and then the updated model is used to initialize the next round of local training. FL and its variants have become synonymous with privacy-preserving distributed machine learning. However, most FL methods have maximization of model accuracy as their sole objective, and rarely are the clients’ needs and constraints considered. In this paper, we consider that clients have differing performance expectations and resource constraints, and we assume local data quality can be improved at a cost. In this light, we treat FL in the training phase as a game in satisfaction form that seeks to satisfy all clients’ expectations. We propose two novel FL methods, a deep reinforcement learning method and a stochastic method, that embrace this design approach. We also account for the scenario where certain clients can adjust their actions even after being satisfied, by introducing probabilistic parameters in both of our methods. The experimental results demonstrate that our proposed methods converge quickly to a lower cost solution than competing methods. Furthermore, it was found that the probabilistic parameters facilitate the attainment of satisfaction equilibria (SE), addressing scenarios where reaching SEs may be challenging within the confines of traditional games in satisfaction form.
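The training loop the abstract describes (clients train locally, share parameters with a global aggregator, and the aggregated model initializes the next round) can be sketched as a minimal FedAvg-style round. This is an illustrative sketch only, not the authors' implementation: the function names and the toy `local_train` stand-in are assumptions.

```python
import numpy as np

def fedavg_round(global_params, client_data_sizes, local_train):
    # Broadcast the global model, let each client train locally,
    # then aggregate the returned parameters weighted by local data size.
    updates = [local_train(global_params.copy(), cid)
               for cid in range(len(client_data_sizes))]
    weights = np.asarray(client_data_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * u for w, u in zip(weights, updates))

# Hypothetical stand-in for a client's local training step:
# client `cid` simply shifts the parameters by cid + 1.
def local_train(params, cid):
    return params + (cid + 1)

# One round with three clients holding 1, 1, and 2 data units.
new_global = fedavg_round(np.zeros(2), [1, 1, 2], local_train)
```

A full FL run would repeat this, feeding `new_global` back in as the next round's initialization.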
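The game-in-satisfaction-form view can likewise be sketched: each client has an expectation threshold, unsatisfied clients keep adjusting their actions, and with probability `eps` an already-satisfied client re-samples too, standing in for the probabilistic parameters mentioned in the abstract. The action set and `toy_utility` below are hypothetical; the paper's actual methods (deep reinforcement learning and a stochastic method) are far richer than this random-search caricature.

```python
import random

def seek_satisfaction_equilibrium(expectations, utility, actions,
                                  eps=0.05, max_rounds=500, seed=0):
    # Satisfaction form: client i is "satisfied" when
    # utility(i, profile) >= expectations[i]; an SE is a profile
    # where every client is satisfied.
    rng = random.Random(seed)
    n = len(expectations)
    profile = [rng.choice(actions) for _ in range(n)]
    for _ in range(max_rounds):
        satisfied = [utility(i, profile) >= expectations[i] for i in range(n)]
        if all(satisfied):
            return profile, True            # satisfaction equilibrium reached
        for i in range(n):
            # Unsatisfied clients always re-sample; satisfied clients
            # re-sample with probability eps (the probabilistic parameter).
            if not satisfied[i] or rng.random() < eps:
                profile[i] = rng.choice(actions)
    return profile, False

# Toy game: a shared accuracy benefit (sum of all efforts)
# minus each client's private cost for its own effort.
def toy_utility(i, profile):
    return sum(profile) - 2 * profile[i]

profile, ok = seek_satisfaction_equilibrium(
    expectations=[0, 0, 0], utility=toy_utility, actions=[0, 1, 2, 3])
```

Note the design choice mirrored from the abstract: satisfaction seeking stops once everyone's threshold is met, rather than maximizing any client's utility further.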
Source journal
IEEE Transactions on Knowledge and Data Engineering (Engineering Technology - Engineering: Electrical & Electronic)
CiteScore: 11.70
Self-citation rate: 3.40%
Annual article count: 515
Review time: 6 months
Journal description: The IEEE Transactions on Knowledge and Data Engineering encompasses knowledge and data engineering aspects within computer science, artificial intelligence, electrical engineering, computer engineering, and related fields. It provides an interdisciplinary platform for disseminating new developments in knowledge and data engineering and explores the practicality of these concepts in both hardware and software. Specific areas covered include knowledge-based and expert systems, AI techniques for knowledge and data management, tools, and methodologies, distributed processing, real-time systems, architectures, data management practices, database design, query languages, security, fault tolerance, statistical databases, algorithms, performance evaluation, and applications.
Latest articles in this journal
SE Factual Knowledge in Frozen Giant Code Model: A Study on FQN and Its Retrieval
Online Dynamic Hybrid Broad Learning System for Real-Time Safety Assessment of Dynamic Systems
Iterative Soft Prompt-Tuning for Unsupervised Domain Adaptation
A Derivative Topic Dissemination Model Based on Representation Learning and Topic Relevance
L-ASCRA: A Linearithmic Time Approximate Spectral Clustering Algorithm Using Topologically-Preserved Representatives