TPFL: Tsetlin-Personalized Federated Learning with Confidence-Based Clustering

Rasoul Jafari Gohari, Laya Aliahmadipour, Ezat Valipour
{"title":"TPFL:基于置信度聚类的蔡特林个性化联合学习","authors":"Rasoul Jafari Gohari, Laya Aliahmadipour, Ezat Valipour","doi":"arxiv-2409.10392","DOIUrl":null,"url":null,"abstract":"The world of Machine Learning (ML) has witnessed rapid changes in terms of\nnew models and ways to process users data. The majority of work that has been\ndone is focused on Deep Learning (DL) based approaches. However, with the\nemergence of new algorithms such as the Tsetlin Machine (TM) algorithm, there\nis growing interest in exploring alternative approaches that may offer unique\nadvantages in certain domains or applications. One of these domains is\nFederated Learning (FL), in which users privacy is of utmost importance. Due to\nits novelty, FL has seen a surge in the incorporation of personalization\ntechniques to enhance model accuracy while maintaining user privacy under\npersonalized conditions. In this work, we propose a novel approach dubbed TPFL:\nTsetlin-Personalized Federated Learning, in which models are grouped into\nclusters based on their confidence towards a specific class. In this way,\nclustering can benefit from two key advantages. Firstly, clients share only\nwhat they are confident about, resulting in the elimination of wrongful weight\naggregation among clients whose data for a specific class may have not been\nenough during the training. This phenomenon is prevalent when the data are\nnon-Independent and Identically Distributed (non-IID). Secondly, by sharing\nonly weights towards a specific class, communication cost is substantially\nreduced, making TPLF efficient in terms of both accuracy and communication\ncost. The results of TPFL demonstrated the highest accuracy on three different\ndatasets; namely MNIST, FashionMNIST and FEMNIST.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"52 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"TPFL: Tsetlin-Personalized Federated Learning with Confidence-Based Clustering\",\"authors\":\"Rasoul Jafari Gohari, Laya Aliahmadipour, Ezat Valipour\",\"doi\":\"arxiv-2409.10392\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The world of Machine Learning (ML) has witnessed rapid changes in terms of\\nnew models and ways to process users data. The majority of work that has been\\ndone is focused on Deep Learning (DL) based approaches. However, with the\\nemergence of new algorithms such as the Tsetlin Machine (TM) algorithm, there\\nis growing interest in exploring alternative approaches that may offer unique\\nadvantages in certain domains or applications. One of these domains is\\nFederated Learning (FL), in which users privacy is of utmost importance. Due to\\nits novelty, FL has seen a surge in the incorporation of personalization\\ntechniques to enhance model accuracy while maintaining user privacy under\\npersonalized conditions. In this work, we propose a novel approach dubbed TPFL:\\nTsetlin-Personalized Federated Learning, in which models are grouped into\\nclusters based on their confidence towards a specific class. In this way,\\nclustering can benefit from two key advantages. Firstly, clients share only\\nwhat they are confident about, resulting in the elimination of wrongful weight\\naggregation among clients whose data for a specific class may have not been\\nenough during the training. 
This phenomenon is prevalent when the data are\\nnon-Independent and Identically Distributed (non-IID). Secondly, by sharing\\nonly weights towards a specific class, communication cost is substantially\\nreduced, making TPLF efficient in terms of both accuracy and communication\\ncost. The results of TPFL demonstrated the highest accuracy on three different\\ndatasets; namely MNIST, FashionMNIST and FEMNIST.\",\"PeriodicalId\":501422,\"journal\":{\"name\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"volume\":\"52 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Distributed, Parallel, and Cluster Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.10392\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.10392","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The world of Machine Learning (ML) has witnessed rapid changes in terms of new models and ways to process users' data. Most prior work has focused on Deep Learning (DL) based approaches. However, with the emergence of new algorithms such as the Tsetlin Machine (TM), there is growing interest in exploring alternative approaches that may offer unique advantages in certain domains or applications. One of these domains is Federated Learning (FL), in which user privacy is of utmost importance. Owing to its novelty, FL has seen a surge in personalization techniques that enhance model accuracy while preserving user privacy. In this work, we propose a novel approach dubbed TPFL: Tsetlin-Personalized Federated Learning, in which models are grouped into clusters based on their confidence towards a specific class. This clustering yields two key advantages. Firstly, clients share only what they are confident about, eliminating erroneous weight aggregation from clients whose training data for a given class may have been insufficient. This situation is prevalent when the data are non-Independent and Identically Distributed (non-IID). Secondly, by sharing only the weights for a specific class, communication cost is substantially reduced, making TPFL efficient in terms of both accuracy and communication cost. TPFL achieved the highest accuracy on three datasets: MNIST, FashionMNIST, and FEMNIST.
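The confidence-gated, per-class sharing described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the `ClientUpdate` structure, the `CONF_THRESHOLD` cutoff, and the plain per-class averaging are all assumptions introduced here for clarity, and an actual Tsetlin Machine would share per-class clause weights rather than generic float vectors.

```python
# Hypothetical sketch of confidence-based, per-class aggregation in the
# spirit of TPFL. All names and the thresholding rule are illustrative
# assumptions, not the paper's API.
from collections import defaultdict
from dataclasses import dataclass

CONF_THRESHOLD = 0.8  # assumed cutoff for a "confident" class


@dataclass
class ClientUpdate:
    client_id: int
    confidence: dict     # class_id -> confidence in [0, 1], estimated locally
    class_weights: dict  # class_id -> weight vector for that class


def select_shared_classes(update: ClientUpdate) -> dict:
    """Client-side filter: share weights only for classes the client is
    confident about. This both keeps under-trained classes (common under
    non-IID data) out of the aggregate and shrinks the upload."""
    return {c: w for c, w in update.class_weights.items()
            if update.confidence.get(c, 0.0) >= CONF_THRESHOLD}


def aggregate_per_class(updates: list) -> dict:
    """Server-side: group (cluster) contributions by class and average each
    class's weights over only the clients confident in that class."""
    buckets = defaultdict(list)
    for u in updates:
        for c, w in select_shared_classes(u).items():
            buckets[c].append(w)
    return {c: [sum(vals) / len(ws) for vals in zip(*ws)]
            for c, ws in buckets.items()}


# Toy round: client 0 is confident about class 3 only; client 1 about 3 and 7.
u0 = ClientUpdate(0, {3: 0.90, 7: 0.40}, {3: [1.0, 2.0], 7: [9.0, 9.0]})
u1 = ClientUpdate(1, {3: 0.85, 7: 0.95}, {3: [3.0, 4.0], 7: [5.0, 6.0]})
print(aggregate_per_class([u0, u1]))
# {3: [2.0, 3.0], 7: [5.0, 6.0]} -- class 7 excludes client 0's
# low-confidence weights entirely.
```

Under this assumed scheme, each client's upload shrinks roughly in proportion to its number of low-confidence classes, which is where the claimed communication savings would come from.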