{"title":"CMI: Client-Targeted Membership Inference in Federated Learning","authors":"Tianhang Zheng, Baochun Li","doi":"10.1109/TDSC.2023.3346692","DOIUrl":null,"url":null,"abstract":"Membership inference is a popular benchmark attack to evaluate the privacy risk of a machine learning model or a learning scheme. However, in federated learning, membership inference is still under-explored due to several issues. For instance, some assumptions in prior works may not be practical in federated learning. Most existing membership inference methods stand on those impractical assumptions or lack generalization ability, which may misestimate the privacy risk. To address these issues, we propose CMI, an attack framework armed by a targeted poisoning method, to conduct a critical evaluation of client-targeted membership inference in federated learning. Under CMI, we consider a strong adversary, refine the prior impractical assumptions, and apply simple but generalizable attack methods. The evaluation results on multiple datasets demonstrate the efficacy of CMI under identically independently distributed (i.i.d.) and non-i.i.d. settings. In terms of the defenses, although differetially private stochatic gradient descent (DP-SGD) is effective under the i.i.d. setting, it does not provide satisfactory protection under label-biased non-i.i.d. settings. Thus, we propose RR-Label, a modified random response algorithm, to defend against membership inference. Compared to DP-SGD and Random Response Top-k (RRTop-k), RR-Label enables a better trade-off between model utility and defensive performance under label-biased non-i.i.d. 
settings.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Dependable and Secure Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TDSC.2023.3346692","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 1
Abstract
Membership inference is a popular benchmark attack for evaluating the privacy risk of a machine learning model or a learning scheme. However, in federated learning, membership inference remains under-explored due to several issues. For instance, some assumptions in prior works may not be practical in federated learning. Most existing membership inference methods rest on those impractical assumptions or lack generalization ability, and may therefore misestimate the privacy risk. To address these issues, we propose CMI, an attack framework armed with a targeted poisoning method, to conduct a critical evaluation of client-targeted membership inference in federated learning. Under CMI, we consider a strong adversary, refine the prior impractical assumptions, and apply simple but generalizable attack methods. Evaluation results on multiple datasets demonstrate the efficacy of CMI under independent and identically distributed (i.i.d.) and non-i.i.d. settings. In terms of defenses, although differentially private stochastic gradient descent (DP-SGD) is effective under the i.i.d. setting, it does not provide satisfactory protection under label-biased non-i.i.d. settings. Thus, we propose RR-Label, a modified randomized response algorithm, to defend against membership inference. Compared to DP-SGD and Random Response Top-k (RRTop-k), RR-Label enables a better trade-off between model utility and defensive performance under label-biased non-i.i.d. settings.
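The abstract describes RR-Label only as a modified randomized response algorithm over labels; its exact mechanism is not given here. As background, the classical k-ary randomized response that such label defenses build on can be sketched as follows — a generic local-DP construction, not the paper's RR-Label; `epsilon` and `num_classes` are illustrative parameters:

```python
import math
import random

def randomized_response_label(true_label: int, num_classes: int, epsilon: float) -> int:
    """Classical k-ary randomized response over class labels.

    Reports the true label with probability e^eps / (e^eps + k - 1),
    otherwise reports one of the other k - 1 labels uniformly at random.
    This satisfies epsilon-local differential privacy for the label.
    """
    k = num_classes
    p_keep = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if random.random() < p_keep:
        return true_label
    # Flip: report one of the remaining labels uniformly.
    others = [c for c in range(k) if c != true_label]
    return random.choice(others)
```

A larger `epsilon` keeps the true label more often (better utility, weaker protection); a smaller `epsilon` flips labels more aggressively, which is the utility/defense trade-off the abstract refers to.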
About the Journal
The "IEEE Transactions on Dependable and Secure Computing (TDSC)" is a prestigious journal that publishes high-quality, peer-reviewed research in the field of computer science, specifically targeting the development of dependable and secure computing systems and networks. This journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems that meet the required levels of reliability, security, and performance.
The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability.
By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in the areas of cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research ensures that it remains at the forefront of advancements in the field, promoting the development of technologies that are critical for the functioning of modern, complex systems.