{"title":"MODA: Model Ownership Deprivation Attack in Asynchronous Federated Learning","authors":"Xiaoyu Zhang, Shen Lin, Chao Chen, Xiaofeng Chen","doi":"10.1109/TDSC.2023.3348204","DOIUrl":null,"url":null,"abstract":"Training a deep learning model from scratch requires a great deal of available labeled data, computation resources, and expert knowledge. Thus, the time-consuming and complicated learning procedure catapulted the trained model to valuable intellectual property (IP), spurring interest from attackers in model copyright infringement and stealing. Recently, a new defense approach leverages watermarking techniques to inject watermarks into the training procedure and verify model ownership when necessary. To our best knowledge, there is no research work on model ownership stealing attacks in federated learning, and the existing defense or mitigation methods can not be directly used for federated learning scenarios. In this article, we introduce watermarking neural networks in asynchronous federated learning and propose a novel model privacy attack, dubbed model ownership deprivation attack (MODA). MODA is launched by an inside adversarial participant, targeting occupying and depriving the remaining participants’ (victims) copyright to achieve his maximum profit. The extensive experimental results on five benchmark datasets (MNIST, Fashion-MNIST, GTSRB, SVHN, CIFAR10) show that MODA is highly effective in a two-participant learning scenario with a minor impact on model's performance. When extending MODA into multiple participants scenario, MODA still maintains high attack success rate and classification accuracy. Compared to the state-of-the-art works, MODA has a higher attack success rate than the black-box solution and comparable efficacy with the approach in the white-box scenario.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Dependable and Secure Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TDSC.2023.3348204","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Citations: 3
Abstract
Training a deep learning model from scratch requires a great deal of labeled data, computational resources, and expert knowledge. This time-consuming and complicated learning procedure turns the trained model into valuable intellectual property (IP), spurring attackers' interest in model copyright infringement and theft. Recently, a new line of defense leverages watermarking techniques to inject watermarks during the training procedure and verify model ownership when necessary. To the best of our knowledge, there is no prior research on model ownership stealing attacks in federated learning, and existing defense or mitigation methods cannot be directly applied to federated learning scenarios. In this article, we introduce watermarking of neural networks in asynchronous federated learning and propose a novel model privacy attack, dubbed the model ownership deprivation attack (MODA). MODA is launched by an inside adversarial participant who aims to occupy the copyright of the remaining participants (the victims) and deprive them of it, maximizing his own profit. Extensive experimental results on five benchmark datasets (MNIST, Fashion-MNIST, GTSRB, SVHN, CIFAR10) show that MODA is highly effective in a two-participant learning scenario, with only a minor impact on the model's performance. When extended to a multi-participant scenario, MODA still maintains a high attack success rate and classification accuracy. Compared to state-of-the-art works, MODA achieves a higher attack success rate than the black-box solution and comparable efficacy to the white-box approach.
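To make the watermark-based ownership mechanism the abstract refers to concrete, the sketch below illustrates the general trigger-set watermarking idea: a participant mixes attacker-chosen trigger samples into its local training objective so the trained model memorizes them, and later claims ownership by measuring the model's accuracy on those triggers. This is a minimal, hypothetical illustration; `SmallCNN`, `local_update`, `verify_watermark`, and the `wm_weight` parameter are our own illustrative assumptions, not the MODA implementation described in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy MNIST-shaped classifier standing in for a participant's model."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def local_update(model, task_batch, trigger_batch, lr=0.05, wm_weight=1.0):
    """One local step that jointly fits the main task and the watermark.

    The trigger loss term is what embeds the watermark: the model is pushed
    to predict attacker-chosen labels on the trigger samples.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    x, y = task_batch
    xt, yt = trigger_batch
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) \
         + wm_weight * F.cross_entropy(model(xt), yt)
    loss.backward()
    opt.step()
    return loss.item()

def verify_watermark(model, triggers, labels, threshold=0.9):
    """Claim ownership if trigger accuracy exceeds a preset threshold."""
    with torch.no_grad():
        acc = (model(triggers).argmax(1) == labels).float().mean().item()
    return acc >= threshold, acc

# Toy demo on random MNIST-shaped data (for illustration only).
model = SmallCNN()
task = (torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,)))
trig = (torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
for _ in range(200):
    local_update(model, task, trig)
print(verify_watermark(model, *trig))
```

In a federated setting, the watermarked local update is sent to the aggregator like any benign update, which is what makes an inside participant's watermark hard to detect from the server's perspective.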
About the journal:
The "IEEE Transactions on Dependable and Secure Computing (TDSC)" is a prestigious journal that publishes high-quality, peer-reviewed research in the field of computer science, specifically targeting the development of dependable and secure computing systems and networks. This journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems that meet the required levels of reliability, security, and performance.
The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability.
By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in the areas of cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research ensures that it remains at the forefront of advancements in the field, promoting the development of technologies that are critical for the functioning of modern, complex systems.