
Latest Publications in IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Model Agnostic Contrastive Explanations for Classification Models
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JETCAS.2024.3486114
Amit Dhurandhar;Tejaswini Pedapati;Avinash Balakrishnan;Pin-Yu Chen;Karthikeyan Shanmugam;Ruchir Puri
Extensive surveys on explanations that are suitable for humans claim that being contrastive is one of an explanation's most important traits. A few methods have been proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose a method, the Model Agnostic Contrastive Explanations Method (MACEM), that can generate contrastive explanations for any classification model for which one can only query the class probabilities for a desired input. This allows us to generate contrastive explanations not only for neural networks, but also for models such as random forests, boosted trees, and even arbitrary ensembles that are still amongst the state of the art when learning on tabular data. Our method is also applicable to scenarios where only black-box access to the model is provided, meaning we can obtain only the predictions and prediction probabilities. With the advent of larger models, it is increasingly common to work in the black-box scenario, where the user will not necessarily have access to the model weights or parameters and can interact with the model only through an API. As such, to obtain meaningful explanations we propose a principled and scalable approach to handling real and categorical features, leading to novel formulations for computing the pertinent positives and negatives that form the essence of a contrastive explanation. A detailed treatment of this nature, focusing on scalability and handling different data types, was not performed in previous work, which assumed all features to be positive real valued with zero indicating the least interesting value. We part with this strong implicit assumption and generalize these methods to a much wider range of problem settings. We quantitatively as well as qualitatively validate our approach on public datasets covering diverse domains.
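To make the black-box setting concrete, the sketch below searches for a pertinent negative, a small feature change that flips the predicted class, using only class-probability queries. It is an illustrative toy, not the MACEM formulation from the paper: `predict_proba` is a hypothetical stand-in for the queried model, and the greedy random search stands in for the paper's principled optimization.

```python
# Minimal sketch of a black-box contrastive-explanation search (not the
# authors' MACEM formulation): find a "pertinent negative" -- a small
# perturbation that flips the predicted class -- via probability queries.
import numpy as np

def predict_proba(x):
    # Hypothetical stand-in for the queried black-box classifier:
    # two classes separated by a fixed linear boundary.
    p1 = 1.0 / (1.0 + np.exp(-(2.0 * x[0] - 1.5 * x[1] + 0.5)))
    return np.array([1.0 - p1, p1])

def pertinent_negative(x, step=0.1, max_iters=200, seed=0):
    """Greedy random search for a minimally perturbed input with a new label."""
    rng = np.random.default_rng(seed)
    base = int(np.argmax(predict_proba(x)))
    delta = np.zeros_like(x)
    for _ in range(max_iters):
        trial = delta.copy()
        i = rng.integers(len(x))            # perturb one feature at a time
        trial[i] += step * rng.choice([-1.0, 1.0])
        # Accept moves that do not increase the original class probability.
        if predict_proba(x + trial)[base] <= predict_proba(x + delta)[base]:
            delta = trial
        if int(np.argmax(predict_proba(x + delta))) != base:
            return x + delta, delta         # contrastive example, sparse change
    return None, delta

x0 = np.array([1.0, 0.2])
cf, change = pertinent_negative(x0)
print("class", np.argmax(predict_proba(x0)), "->", cf, "via", change)
```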
Citations: 0
Stealing the Invisible: Unveiling Pre-Trained CNN Models Through Adversarial Examples and Timing Side-Channels
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JETCAS.2024.3485133
Shubhi Shukla;Manaar Alam;Pabitra Mitra;Debdeep Mukhopadhyay
Machine learning, with its myriad applications, has become an integral component of numerous AI systems. A common practice in this domain is the use of transfer learning, where a pre-trained model's architecture, readily available to the public, is fine-tuned to suit specific tasks. As Machine Learning as a Service (MLaaS) platforms increasingly use pre-trained models in their backends, it is crucial to safeguard these architectures and understand their vulnerabilities. In this work, we present ArchWhisperer, a model fingerprinting attack approach based on the novel observation that the classification patterns of adversarial images can be used as a means to steal the models. Furthermore, the adversarial image classifications, in conjunction with model inference times, are used to further enhance our attack in terms of both attack effectiveness and query budget. ArchWhisperer is designed for typical user-level access in remote MLaaS environments, and it exploits the varying misclassifications of adversarial images across different models to fingerprint several renowned Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. We profile remote model inference times to reduce the number of adversarial images needed, subsequently decreasing the number of queries required. We present results over 27 pre-trained models of different CNN and ViT architectures using the CIFAR-10 dataset and demonstrate a high accuracy of 88.8% while keeping the query budget under 20. This is a marked improvement over state-of-the-art works.
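The core idea can be illustrated with a minimal sketch: build a fingerprint for each candidate architecture from how a fixed set of adversarial probe images is labeled, plus a coarse timing feature, then match an unknown model to the nearest reference fingerprint. This is not the ArchWhisperer pipeline; `model_fn`, the probe set, and the reference table are assumptions for illustration.

```python
# Minimal sketch of fingerprinting via adversarial-probe labels plus timing
# (illustrative only; not the ArchWhisperer pipeline).
import time
import numpy as np

def fingerprint(model_fn, probes):
    """Labels on probe inputs plus a coarse latency feature, as one vector."""
    labels, times = [], []
    for p in probes:
        t0 = time.perf_counter()
        labels.append(model_fn(p))
        times.append(time.perf_counter() - t0)
    # Misclassification patterns separate architectures; median latency
    # (here scaled to microseconds) coarsely separates model sizes.
    return np.array(labels + [np.median(times) * 1e6])

def identify(unknown_fp, references):
    """Nearest reference fingerprint wins."""
    names = list(references)
    return names[int(np.argmin([np.linalg.norm(unknown_fp - references[n])
                                for n in names]))]

# Toy usage with stand-in "models" that disagree on every probe.
probes = [0, 1, 2, 3]
refs = {"arch-A": fingerprint(lambda i: i % 2, probes),
        "arch-B": fingerprint(lambda i: (i + 1) % 2, probes)}
print(identify(fingerprint(lambda i: i % 2, probes), refs))  # -> arch-A
```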
Citations: 0
RLFL: A Reinforcement Learning Aggregation Approach for Hybrid Federated Learning Systems Using Full and Ternary Precision
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-18 | DOI: 10.1109/JETCAS.2024.3483554
HamidReza Imani;Jeff Anderson;Samuel Farid;Abdolah Amirany;Tarek El-Ghazawi
Federated Learning (FL) has emerged as an approach to provide a privacy-preserving and communication-efficient Machine Learning (ML) framework in mobile-edge environments, which are likely to be resource-constrained and heterogeneous. The required precision level and performance of each device may therefore vary with circumstances, giving rise to designs containing mixed-precision and quantized models. Among the various quantization schemes, binary and ternary representations are significant since they enable arrangements that strike effective balances between performance and precision. In this paper, we propose RLFL, a hybrid ternary/full-precision FL system along with a Reinforcement Learning (RL) aggregation method, with the goal of improving performance compared to a homogeneous ternary environment. This system consists of a mix of clients with full-precision ML models and resource-constrained clients with ternary ML models. However, aggregating models with ternary and full-precision weights using traditional aggregation approaches is a challenge due to the disparity in weight magnitudes. To obtain improved accuracy, we use a deep RL model to explore and optimize the contribution assigned to each client's model during aggregation in each iteration. We evaluate and compare the accuracy and communication overhead of the proposed approach against prior work on the classification of the MNIST, FMNIST, and CIFAR10 datasets. Evaluation results show that the proposed RLFL system, along with its aggregation technique, outperforms existing FL approaches in accuracy by 5% to 19% while imposing negligible computation overhead.
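A minimal sketch of the aggregation step is shown below, assuming ternary clients ship weights in {-1, 0, +1} with a per-client scaling factor that is dequantized before averaging. The per-client contribution weights, which the paper's deep RL agent would learn, are fixed by hand here.

```python
# Minimal sketch of aggregating full-precision and ternary client updates
# (illustrative; the paper learns the contribution weights with deep RL,
# here they are fixed by hand).
import numpy as np

def dequantize_ternary(w_ternary, scale):
    """Map ternary weights {-1, 0, +1} back to the client's scale."""
    return w_ternary * scale

def aggregate(full_updates, ternary_updates, contributions):
    """Weighted average over heterogeneous client updates."""
    updates = list(full_updates) + [dequantize_ternary(w, s)
                                    for w, s in ternary_updates]
    c = np.asarray(contributions, dtype=float)
    c = c / c.sum()                       # normalize contribution weights
    return sum(ci * u for ci, u in zip(c, updates))

full = [np.array([0.20, -0.50]), np.array([0.10, -0.40])]
ternary = [(np.array([1, -1]), 0.3)]      # (ternary weights, scaling factor)
global_update = aggregate(full, ternary, contributions=[0.4, 0.4, 0.2])
print(global_update)                      # -> [ 0.18 -0.42]
```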
Citations: 0
A Reinforcement Learning-Based ELF Adversarial Malicious Sample Generation Method
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-15 | DOI: 10.1109/JETCAS.2024.3481273
Mingfu Xue;Jinlong Fu;Zhiyuan Li;Shifeng Ni;Heyi Wu;Leo Yu Zhang;Yushu Zhang;Weiqiang Liu
In recent years, domestic Linux operating systems have developed rapidly, but the threat of ELF viruses has become increasingly prominent. Currently, domestic antivirus software for information technology application innovation (ITAI) operating systems shows insufficient capability in detecting ELF viruses. At the same time, research on generating malicious samples in ELF format is scarce. To fill this gap and meet the growing application needs of domestic antivirus software companies, this paper proposes an automatic ELF adversarial malicious sample generation technique based on reinforcement learning. Within a reinforcement learning framework, after cycles of feature extraction, malicious detection, agent decision-making, and evade-detection operations, a sample can evade the detection of antivirus engines. Specifically, nine feature extractor subclasses are used to extract features in multiple aspects. The PPO algorithm is used as the agent algorithm. The action table in the evade-detection module contains 11 evade-detection operations for ELF malicious samples. The method is experimentally verified on an ITAI operating system, with an ELF malicious sample set for the Linux x86 platform as the original sample set. ClamAV's detection rate on this sample set is 98% before processing and drops to 25% after processing. 360 Security's detection rate is 4% before processing and drops to 1% after processing. Furthermore, after processing, the average number of engines on VirusTotal that detect the maliciousness of the samples decreases from 39 to 15. Many malicious samples were detected by 41–43 engines on VirusTotal before processing, while after the evade-detection processing, only 8–9 engines on VirusTotal can detect the malware. In terms of executability and malicious function consistency, the processed samples still run normally and their malicious functions remain consistent with those before processing. Overall, the proposed method can effectively generate adversarial ELF malware samples. Using this method to generate malicious samples to test and train antivirus software can improve its detection and defense capability against malware.
Citations: 0
RobustDA: Lightweight Robust Domain Adaptation for Evolving Data at Edge
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-11 | DOI: 10.1109/JETCAS.2024.3478359
Xinyu Guo;Xiaojiang Zuo;Rui Han;Junyan Ouyang;Jing Xie;Chi Harold Liu;Qinglong Zhang;Ying Guo;Jing Chen;Lydia Y. Chen
AI applications powered by deep learning models are increasingly run natively at the edge. A deployed model not only encounters continuously evolving input distributions (domains) but also faces adversarial attacks from third parties. This necessitates adapting the model to shifting domains to maintain high natural accuracy while avoiding degradation of the model's robust accuracy. However, existing domain adaptation and adversarial attack prevention techniques often have conflicting optimization objectives, and they rely on time-consuming training processes. This paper presents RobustDA, an on-device lightweight approach that co-optimizes natural and robust accuracies in model retraining. It uses a set of low-rank adapters to retain all learned domains' knowledge with small overheads. In each model retraining, RobustDA constructs an adapter to separate domain-related and robustness-related model parameters to avoid conflicts when updating them. Based on the retained knowledge, it quickly generates adversarial examples with high-quality pseudo-labels and uses them to accelerate the retraining process. We demonstrate that, compared against 14 state-of-the-art DA techniques under 7 prevalent adversarial attacks on edge devices, the proposed co-optimization approach improves natural and robust accuracies by 6.34% and 11.41% simultaneously. At the same accuracy, RobustDA also speeds up the retraining process by 4.09x.
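The adapter mechanism can be sketched as a generic LoRA-style low-rank update over a frozen base layer, so that each retraining touches only a small number of parameters. RobustDA's actual separation of domain-related and robustness-related parameters is more involved than this toy.

```python
# Minimal sketch of a LoRA-style low-rank adapter over a frozen base layer
# (illustrative; RobustDA's parameter separation is more involved).
import torch
import torch.nn as nn

class LowRankAdapterLinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze learned knowledge
        # Trainable low-rank update: effective weight is W + B @ A.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x):
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LowRankAdapterLinear(nn.Linear(128, 64), rank=4)
out = layer(torch.randn(8, 128))  # only A and B train; the base stays intact
```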
Citations: 0
Auditing and Generating Synthetic Data With Controllable Trust Trade-Offs
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-10 | DOI: 10.1109/JETCAS.2024.3477976
Brian Belgodere;Pierre Dognin;Adam Ivankay;Igor Melnyk;Youssef Mroueh;Aleksandra Mojsilović;Jiri Navratil;Apoorva Nitsure;Inkit Padhi;Mattia Rigotti;Jerret Ross;Yair Schiff;Radhika Vedpathak;Richard A. Young
Real-world data often exhibits bias, imbalance, and privacy risks. Synthetic datasets have emerged to address these issues by enabling a paradigm that relies on generative AI models to generate unbiased, privacy-preserving data while maintaining fidelity to the original data. However, assessing the trustworthiness of synthetic datasets and models is a critical challenge. We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models. It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation. We demonstrate our framework's effectiveness by auditing various generative models across diverse use cases such as education, healthcare, banking, and human resources, spanning different data modalities such as tabular, time-series, vision, and natural language. This holistic assessment is essential for compliance with regulatory safeguards. We introduce a trustworthiness index to rank synthetic datasets based on their safeguard trade-offs. Furthermore, we present a trustworthiness-driven model selection and cross-validation process during training, exemplified with "TrustFormers" across various data types. This approach allows for controllable trustworthiness trade-offs in synthetic data creation. Our auditing framework fosters collaboration among stakeholders, including data scientists, governance experts, internal reviewers, external certifiers, and regulators. This transparent reporting should become standard practice to prevent bias, discrimination, and privacy violations, ensuring compliance with policies and providing accountability, safety, and performance guarantees.
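A trustworthiness index of this kind can be sketched as a weighted aggregate of per-dimension audit scores, where the weights encode the desired safeguard trade-off. The dimensions, scores, and weights below are illustrative assumptions, not the paper's definitions.

```python
# Minimal sketch of a trustworthiness index as a weighted aggregate of
# per-dimension audit scores in [0, 1] (dimensions, scores, and weights
# are illustrative assumptions, not the paper's definitions).
def trust_index(scores, weights):
    total = sum(weights.values())
    return sum(weights[k] * scores[k] for k in weights) / total

datasets = {
    "synth_a": {"fidelity": 0.95, "utility": 0.90, "privacy": 0.60, "fairness": 0.85},
    "synth_b": {"fidelity": 0.80, "utility": 0.78, "privacy": 0.95, "fairness": 0.84},
}
# A privacy-heavy weighting ranks synth_b first; doubling fidelity instead
# would flip the ranking -- the weights encode the safeguard trade-off.
w = {"fidelity": 1.0, "utility": 1.0, "privacy": 2.0, "fairness": 1.0}
print(sorted(datasets, key=lambda d: trust_index(datasets[d], w), reverse=True))
```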
Citations: 0
An Overview of Trustworthy AI: Advances in IP Protection, Privacy-Preserving Federated Learning, Security Verification, and GAI Safety Alignment
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-09 | DOI: 10.1109/JETCAS.2024.3477348
Yue Zheng;Chip-Hong Chang;Shih-Hsu Huang;Pin-Yu Chen;Stjepan Picek
AI has undergone a remarkable evolutionary journey marked by groundbreaking milestones. Like any powerful tool, it can be turned into a weapon for devastation in the wrong hands. Understanding that no model is perfect, trustworthy AI aims to mitigate the harm AI can inflict on people and society by prioritizing socially responsible AI ideation, design, development, and deployment to effect positive change. The scope of trustworthy AI is broad, covering qualities such as safety, security, privacy, transparency, explainability, fairness, impartiality, robustness, reliability, and accountability. This overview paper anchors on recent advances in four research hotspots of trustworthy AI that raise compelling and challenging security, privacy, and safety issues. The topics discussed include the intellectual property protection of deep learning and generative models, the trustworthiness of federated learning, verification and testing tools for AI systems, and the safety alignment of generative AI systems. Through this comprehensive review, we aim to provide readers with an overview of the most up-to-date research problems and solutions. By presenting the rapidly evolving factors and constraints that motivate emerging attack and defense strategies throughout the AI life-cycle, we hope to inspire more research effort into guiding AI technologies towards beneficial purposes with greater robustness against malicious use intent.
Citations: 0
Diffense: Defense Against Backdoor Attacks on Deep Neural Networks With Latent Diffusion
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-09-27 | DOI: 10.1109/JETCAS.2024.3469377
Bowen Hu;Chip-Hong Chang
As deep neural network (DNN) models are used in a wide variety of applications, their security has attracted considerable attention. Among the known security vulnerabilities, backdoor attacks have become the most notorious threat to users of pre-trained DNNs and machine learning services. Such attacks manipulate the training data or training process in such a way that the trained model produces a false output for an input that carries a specific trigger but behaves normally otherwise. In this work, we propose Diffense, a method for detecting such malicious inputs based on the distribution of the latent feature maps of clean input samples of the possibly infected target DNN. By learning the feature-map distribution with a diffusion model and sampling from that model under the guidance of the data to be inspected, backdoor attack data can be detected by its distance from the sampled result. Diffense requires no knowledge of the structure, weights, or training data of the target DNN model, nor does it need to be aware of the backdoor attack method. Diffense is non-intrusive: the accuracy of the target model on clean inputs is not affected, and the inference service can run uninterrupted alongside Diffense. Extensive experiments on DNNs trained for MNIST, CIFAR-10, GTSRB, ImageNet-10, LSUN Object, and LSUN Scene applications show that the attack success rates of diverse backdoor attacks, including BadNets, IDBA, WaNet, ISSBA, and HTBA, are significantly suppressed by Diffense. The results generally exceed the performance of existing backdoor mitigation methods, including those that require model modifications or prior knowledge of model weights or attack samples.
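The decision rule can be sketched as follows, assuming access to the input's latent feature map and a hypothetical `diffusion_sample` callable that draws a guided sample from the clean-feature diffusion model; a toy sampler stands in so the sketch runs end to end.

```python
# Minimal sketch of the detection rule (not Diffense's implementation):
# flag an input when its latent feature map lies far from a sample drawn,
# under its own guidance, from a diffusion model of *clean* feature maps.
import numpy as np

def detect(feature_map, diffusion_sample, threshold):
    """diffusion_sample: hypothetical callable returning a guided sample
    from the clean-feature diffusion model; threshold: tuned on clean
    data so that benign inputs score below it."""
    score = np.linalg.norm(feature_map - diffusion_sample(feature_map))
    return score > threshold, score

# Toy stand-in sampler: pulls features halfway toward the clean mean.
clean_mean = np.zeros(512)
toy_sampler = lambda f: 0.5 * f + 0.5 * clean_mean
suspicious = np.random.default_rng(1).normal(3.0, 1.0, 512)  # off-distribution
print(detect(suspicious, toy_sampler, threshold=20.0))       # (True, ~36)
```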
Citations: 0
Efficient Artificial Intelligence With Novel Matrix Transformations and Homomorphic Encryption
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-09-24 | DOI: 10.1109/JETCAS.2024.3466849
Quoc Bao Phan;Tuy Tan Nguyen
This paper addresses the challenges of data privacy and computational efficiency in artificial intelligence (AI) models by proposing a novel hybrid model that combines homomorphic encryption (HE) with AI to enhance security while maintaining learning accuracy. The novelty of our model lies in a new matrix transformation technique that ensures compatibility between HE algorithms and AI model weight matrices, significantly improving computational efficiency. Furthermore, we present a first-of-its-kind mathematical proof of convergence for integrating HE into AI models using the adaptive moment estimation (Adam) optimization algorithm. The effectiveness and practicality of our approach for training on encrypted data are showcased through comprehensive evaluations on well-known datasets for air pollution forecasting and forest fire detection. These results demonstrate high model performance, with an R-squared of nearly 1 for air pollution forecasting and 99% accuracy for forest fire detection. Additionally, our approach reduces data storage by up to 90% and increases speed tenfold compared to models that do not use the matrix transformation method. Our primary contribution lies in enhancing the security, efficiency, and dependability of AI models, particularly when dealing with sensitive data.
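The paper's transformation is not reproduced here, but the general idea of reshaping a weight matrix for HE-friendly computation can be illustrated in plaintext with the classic diagonal method, which turns a matrix-vector product into the rotations and elementwise products that packed CKKS-style ciphertexts support cheaply.

```python
# Plaintext sketch of the classic "diagonal method" for HE-friendly
# matrix-vector products: storing W by generalized diagonals turns W @ x
# into rotations plus elementwise products, the operations that packed
# CKKS-style ciphertexts support cheaply. (Illustrates the general idea;
# the paper defines its own transformation.)
import numpy as np

def diagonals(W):
    n = W.shape[0]
    return [np.array([W[i, (i + d) % n] for i in range(n)]) for d in range(n)]

def matvec_diagonal(diags, x):
    acc = np.zeros(len(x))
    for d, diag in enumerate(diags):
        acc += diag * np.roll(x, -d)   # rotate-then-multiply, HE-compatible
    return acc

W = np.arange(16.0).reshape(4, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(matvec_diagonal(diagonals(W), x), W @ x)
```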
Citations: 0
Re_useVFL: Reuse of Parameters-Based Verifiable Federated Learning With Privacy Preservation Using Gradient Sparsification
IF 3.7 | CAS Tier 2 (Engineering & Technology) | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-09-19 | DOI: 10.1109/JETCAS.2024.3463738
Ningxin He;Tiegang Gao;Chuan Zhou
Federated learning (FL) exhibits promising potential in the Industrial Internet of Things (IIoT), as it allows multiple institutions to collaboratively train a global model without sharing local data. However, there are still many privacy and security concerns in FL systems. The cloud server responsible for aggregating model parameters may be malicious and may distribute manipulated aggregation results to launch nefarious attacks. Additionally, industrial agents may provide incomplete parameters, negatively impacting the global model's performance. To address these issues, we introduce Re_useVFL, an efficient privacy-preserving full-process FL verification scheme. It integrates BLS-based signature verification, adaptive gradient sparsification (AdaGS), and multi-key CKKS encryption (MK-CKKS). Our scheme ensures the integrity of the parameters uploaded by agents, the correctness of the cloud server's aggregation results, and the consistency of the distributed results, thereby providing comprehensive verification across the entire FL process. It also maintains validation accuracy even when some agents drop out during computation. The AdaGS algorithm notably reduces validation overhead by optimizing parameter sparsification and reuse. Additionally, MK-CKKS is employed to protect agents' privacy and prevent collusion between agents and the server. Our experiments on three datasets confirm that Re_useVFL achieves lower validation resource overhead than existing methods, demonstrating its practical effectiveness.
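The sparsification building block can be sketched as generic top-k gradient compression with residual (error) feedback, where unsent gradient mass is carried over and reused in later rounds. The paper's AdaGS additionally adapts the sparsity and verifies the results, which this toy omits.

```python
# Minimal sketch of top-k gradient sparsification with residual (error)
# feedback; the paper's AdaGS additionally adapts sparsity and reuses
# parameters across rounds, which this toy omits.
import numpy as np

class TopKSparsifier:
    def __init__(self, k):
        self.k = k
        self.residual = None   # unsent gradient mass, reused next round

    def compress(self, grad):
        if self.residual is None:
            self.residual = np.zeros_like(grad)
        g = grad + self.residual                     # fold in leftovers
        idx = np.argpartition(np.abs(g), -self.k)[-self.k:]
        kept = np.zeros_like(g)
        kept[idx] = g[idx]
        self.residual = g - kept                     # remember what was dropped
        return idx, g[idx]                           # only k entries are sent

sp = TopKSparsifier(k=2)
idx, vals = sp.compress(np.array([0.05, -0.9, 0.3, 0.01]))
print(idx, vals, sp.residual)   # keeps the two largest; residual holds the rest
```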
联合学习(FL)在工业物联网(IIoT)中展现出了巨大的潜力,因为它允许多个机构在不共享本地数据的情况下合作训练一个全球模型。然而,FL 系统仍然存在许多隐私和安全问题。负责聚合模型参数的云服务器可能是恶意的,它可能会发布被操纵的聚合结果,从而发起恶意攻击。此外,工业代理可能会提供不完整的参数,从而对全局模型的性能产生负面影响。为了解决这些问题,我们引入了 Re_useVFL,这是一种高效的隐私保护全流程 FL 验证方案。它集成了基于 BLS 的签名验证、自适应梯度稀疏化(AdaGS)和多密钥 CKKS 加密(MK-CKKS)。我们的方案确保了代理上传参数的完整性、云服务器聚合结果的正确性以及分布式结果的一致性验证,从而为整个 FL 流程提供了全面的验证。即使有些代理在计算过程中退出,它也能保持验证的准确性。AdaGS 算法通过优化参数稀疏化和重复使用,显著降低了验证开销。此外,该算法还采用了 MK-CKKS 来保护代理隐私,防止代理与服务器串通。我们在三个数据集上进行的实验证实,与现有方法相比,Re_useVFL 能够实现更低的验证资源开销,这证明了它的实用有效性。
{"title":"Re_useVFL: Reuse of Parameters-Based Verifiable Federated Learning With Privacy Preservation Using Gradient Sparsification","authors":"Ningxin He;Tiegang Gao;Chuan Zhou","doi":"10.1109/JETCAS.2024.3463738","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3463738","url":null,"abstract":"Federated learning (FL) exhibits promising potential in the Industrial Internet of Things (IIoT) as it allows multiple institutions to collaboratively train a global model without sharing local data. However, there are still many privacy and security concerns in FL systems. The cloud server responsible for aggregating model parameters may be malicious, and it may distribute manipulated aggregation results that could launch nefarious attacks. Additionally, industrial agents may provide incomplete parameters, negatively impacting the global model’s performance. To address these issues, we introduce Re_useVFL, an efficient privacy-preserving full-process FL verification scheme. It integrates BLS-based signature verification, adaptive gradient sparsification (AdaGS), and Multi-Key CKKS encryption (MK-CKKS). Our scheme ensures the integrity of agents-uploaded parameters, the correctness of the cloud server’s aggregation results, and the consistency verification of distributed results, thereby providing comprehensive verification across the entire FL process. It also maintains validation accuracy even with some agents dropout during computation. The AdaGS algorithm notably reduces validation overhead by optimizing parameter sparsification and reuse. Additionally, employing MK-CKKS to protect agents privacy and prevent agent and server collusion. Our experiments on three datasets confirm that Re_useVFL achieves lower validation resource overhead compared to existing methods, demonstrating its practical effectiveness.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"647-660"},"PeriodicalIF":3.7,"publicationDate":"2024-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0