Hossein Abedi Khorasgani, Noman Mohammed, Yang Wang
International Journal of Information Security, published 2024-04-02. DOI: 10.1007/s10207-024-00839-7
Attribute inference privacy protection for pre-trained models
With the increasing popularity of machine learning (ML) in image processing, privacy has emerged as a significant concern in deploying and using ML services. However, current privacy protection approaches often require computationally expensive training from scratch or extensive fine-tuning of models, posing significant barriers to the development of privacy-conscious models, particularly for smaller organizations seeking to comply with data privacy laws. In this paper, we address the privacy challenges in computer vision by investigating the effectiveness of two recent fine-tuning methods, Model Reprogramming and Low-Rank Adaptation. We adapt these techniques to provide attribute protection for pre-trained models while minimizing computational overhead and training time. Specifically, we modify the models to produce privacy-preserving latent representations of images that cannot be used to infer unintended attributes. We integrate these methods into an adversarial min–max framework that conceals sensitive information from feature outputs without extensive modifications to the pre-trained model, instead training only a small set of new parameters. We demonstrate the effectiveness of our methods through experiments on the CelebA dataset, achieving state-of-the-art performance while significantly reducing computational complexity and cost. Our research contributes practical solutions that enhance the privacy of machine learning services without compromising efficiency.
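The adversarial min–max setup described in the abstract can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' code: all class and variable names are assumptions, and a LoRA-style low-rank adapter stands in for both parameter-efficient methods the paper studies (Model Reprogramming would instead learn a small trainable perturbation on the input). A frozen pre-trained encoder gains a small set of new parameters; an adversary head tries to recover a sensitive attribute from the latent features, and training alternates between strengthening the adversary and updating only the new parameters to keep task accuracy while defeating it.

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Adds a rank-r update x @ A^T @ B^T to a frozen linear layer (LoRA-style)."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # pre-trained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: starts as identity update

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

# Toy dimensions; in the paper's setting the encoder is a pre-trained vision model.
encoder = LowRankAdapter(nn.Linear(32, 16))            # frozen encoder + small adapter
task_head = nn.Linear(16, 2)                           # intended (utility) task
adversary = nn.Linear(16, 2)                           # probe for the sensitive attribute

opt_priv = torch.optim.Adam(
    [encoder.A, encoder.B] + list(task_head.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

x = torch.randn(64, 32)
y_task = torch.randint(0, 2, (64,))                    # intended labels
y_sens = torch.randint(0, 2, (64,))                    # sensitive attribute

for step in range(50):
    # 1) max step: train the adversary to infer the attribute from frozen features
    z = encoder(x).detach()
    opt_adv.zero_grad()
    ce(adversary(z), y_sens).backward()
    opt_adv.step()

    # 2) min step: update only the adapter and task head to keep the task loss
    #    low while increasing the adversary's loss (min–max via sign flip)
    opt_priv.zero_grad()
    z = encoder(x)
    loss = ce(task_head(z), y_task) - ce(adversary(z), y_sens)
    loss.backward()
    opt_priv.step()

trainable = sum(p.numel() for p in [encoder.A, encoder.B])
frozen = sum(p.numel() for p in encoder.base.parameters())
print(trainable, frozen)  # the adapter is far smaller than the frozen encoder
```

The key property the abstract emphasizes is visible in the parameter counts: only the low-rank matrices (and heads) are optimized, so the privacy objective is trained at a fraction of the cost of fine-tuning or retraining the full model.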
Journal overview:
The International Journal of Information Security is an English-language periodical on research in information security that offers prompt publication of important technical work, whether theoretical, applicable, or related to implementation.
Coverage includes:
- System security: intrusion detection, secure end systems, secure operating systems, database security, security infrastructures, security evaluation
- Network security: Internet security, firewalls, mobile security, security agents, protocols, anti-virus and anti-hacker measures
- Content protection: watermarking, software protection, tamper-resistant software
- Applications: electronic commerce, government, health, telecommunications, mobility