Zipeng Ye, Wenjian Luo, Ruizhuo Zhang, Hongwei Zhang, Yuhui Shi, Yan Jia
{"title":"揭示具有更高特征保真度的 DNN 训练数据的进化攻击","authors":"Zipeng Ye, Wenjian Luo, Ruizhuo Zhang, Hongwei Zhang, Yuhui Shi, Yan Jia","doi":"10.1109/TDSC.2023.3347225","DOIUrl":null,"url":null,"abstract":"Model inversion attacks aim to reveal information about sensitive training data of AI models, which may lead to serious privacy leakage. However, existing attack methods have limitations in reconstructing training data with higher feature fidelity. In this article, we propose an evolutionary model inversion attack approach (EvoMI) and empirically demonstrate that combined with the systematic search in the multi-degree-of-freedom latent space of the generative model, the simple use of an evolutionary algorithm can effectively improve the attack performance. Concretely, at first, we search for latent vectors which can generate images close to the attack target in the latent space with low-degree of freedom. Generally, the low-freedom constraint will reduce the probability of getting a local optima compared to existing methods that directly search for latent vectors in the high-freedom space. Consequently, we introduce a mutation operation to expand the search domain, thus further reduce the possibility of obtaining a local optima. Finally, we treat the searched latent vectors as the initial values of the post-processing and relax the constraint to further optimize the latent vectors in a higher-freedom space. 
Our proposed method is conceptually simple and easy to implement, yet it achieves substantial improvements and outperforms the state-of-the-art methods significantly.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.0000,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Evolutionary Attack for Revealing Training Data of DNNs With Higher Feature Fidelity\",\"authors\":\"Zipeng Ye, Wenjian Luo, Ruizhuo Zhang, Hongwei Zhang, Yuhui Shi, Yan Jia\",\"doi\":\"10.1109/TDSC.2023.3347225\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Model inversion attacks aim to reveal information about sensitive training data of AI models, which may lead to serious privacy leakage. However, existing attack methods have limitations in reconstructing training data with higher feature fidelity. In this article, we propose an evolutionary model inversion attack approach (EvoMI) and empirically demonstrate that combined with the systematic search in the multi-degree-of-freedom latent space of the generative model, the simple use of an evolutionary algorithm can effectively improve the attack performance. Concretely, at first, we search for latent vectors which can generate images close to the attack target in the latent space with low-degree of freedom. Generally, the low-freedom constraint will reduce the probability of getting a local optima compared to existing methods that directly search for latent vectors in the high-freedom space. Consequently, we introduce a mutation operation to expand the search domain, thus further reduce the possibility of obtaining a local optima. Finally, we treat the searched latent vectors as the initial values of the post-processing and relax the constraint to further optimize the latent vectors in a higher-freedom space. 
Our proposed method is conceptually simple and easy to implement, yet it achieves substantial improvements and outperforms the state-of-the-art methods significantly.\",\"PeriodicalId\":13047,\"journal\":{\"name\":\"IEEE Transactions on Dependable and Secure Computing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":7.0000,\"publicationDate\":\"2024-07-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Dependable and Secure Computing\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1109/TDSC.2023.3347225\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Dependable and Secure Computing","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1109/TDSC.2023.3347225","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
An Evolutionary Attack for Revealing Training Data of DNNs With Higher Feature Fidelity
Model inversion attacks aim to reveal information about the sensitive training data of AI models, which may lead to serious privacy leakage. However, existing attack methods are limited in their ability to reconstruct training data with high feature fidelity. In this article, we propose an evolutionary model inversion attack (EvoMI) and empirically demonstrate that, combined with a systematic search in the multi-degree-of-freedom latent space of the generative model, the simple use of an evolutionary algorithm can effectively improve attack performance. Concretely, we first search a low-degree-of-freedom latent space for latent vectors that generate images close to the attack target. Compared to existing methods that directly search for latent vectors in the high-freedom space, this low-freedom constraint generally reduces the probability of converging to a local optimum. We then introduce a mutation operation that expands the search domain, further reducing the chance of getting trapped in a local optimum. Finally, we treat the searched latent vectors as initial values for post-processing and relax the constraint, further optimizing them in a higher-freedom space. Our proposed method is conceptually simple and easy to implement, yet it achieves substantial improvements and significantly outperforms state-of-the-art methods.
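The two-stage search described in the abstract can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration only: `generate` stands in for the paper's generative model, `fitness` for the target classifier's confidence, and the `(1+λ)`-style loop for the unspecified evolutionary algorithm; none of this is the actual EvoMI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.normal(size=8)  # toy "image" of the attacked identity

def generate(z):
    # Toy generator: identity map from latent space to image space.
    return z.copy()

def fitness(z):
    # Toy target-model confidence: higher when generate(z) is near TARGET.
    return -np.linalg.norm(generate(z) - TARGET)

def evolve(dim_free, steps=200, pop=16, sigma=0.3, z0=None):
    """Evolutionary search where only the first `dim_free` latent
    coordinates may mutate (the low-degree-of-freedom constraint)."""
    z = np.zeros(8) if z0 is None else z0.copy()
    for _ in range(steps):
        # Mutation: perturb only the free coordinates of each offspring.
        offspring = np.tile(z, (pop, 1))
        offspring[:, :dim_free] += sigma * rng.normal(size=(pop, dim_free))
        best = offspring[np.argmax([fitness(c) for c in offspring])]
        if fitness(best) > fitness(z):  # elitist: keep improvements only
            z = best
    return z

# Stage 1: constrained, low-freedom search to avoid local optima.
z_low = evolve(dim_free=2)
# Stage 2: reuse the result as initialization, relax the constraint,
# and optimize in the full (higher-freedom) latent space.
z_high = evolve(dim_free=8, z0=z_low)
```

Because the loop only ever accepts improving offspring, the relaxed stage can never do worse than the constrained initialization it starts from; the constrained stage's role is purely to supply a good starting point.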
Journal introduction:
The IEEE Transactions on Dependable and Secure Computing (TDSC) is a prestigious journal that publishes high-quality, peer-reviewed research in computer science, specifically targeting the development of dependable and secure computing systems and networks. The journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems meeting required levels of reliability, security, and performance.
The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability.
By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in the areas of cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research ensures that it remains at the forefront of advancements in the field, promoting the development of technologies that are critical for the functioning of modern, complex systems.