
Latest publications in IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Decision Guided Robust DL Classification of Adversarial Images Combining Weaker Defenses
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-13 | DOI: 10.1109/JETCAS.2024.3497295
Shubhajit Datta;Manaar Alam;Arijit Mondal;Debdeep Mukhopadhyay;Partha Pratim Chakrabarti
Adversarial examples undermine the safe deployment of Deep Learning (DL) models in practical systems. Although several defense techniques have been proposed in the literature, defending against adversarial attacks remains challenging. The current work identifies weaknesses of traditional strategies in detecting and classifying adversarial examples. To overcome these limitations, we carefully analyze techniques such as binary detectors and ensemble methods, and compose them in a manner that mitigates their individual weaknesses. We also develop a re-attack strategy, a randomization technique called RRP (Random Resizing and Patch-removing), and a rule-based decision method. Our proposed method, BEARR (Binary detector with Ensemble and re-Attacking scheme including Randomization and Rule-based decision technique), both detects adversarial examples and classifies them with higher accuracy than contemporary methods. We evaluate BEARR on standard image classification datasets (CIFAR-10, CIFAR-100, and Tiny-ImageNet) as well as two real-world datasets (PlantVillage and chest X-ray) in the presence of state-of-the-art adversarial attack techniques. We have also validated BEARR against a more potent attacker who has perfect knowledge of the protection mechanism. We observe that BEARR significantly outperforms existing methods in the detection and classification accuracy of adversarial examples.
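The abstract describes RRP only at a high level. A minimal pure-Python sketch of the idea (randomly resize, remove a patch, resize back so a fixed classifier can consume the result); the names `rrp` and `resize_nearest` and all parameter values are illustrative, not from the paper:

```python
import random

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of a 2D list-of-lists image."""
    h, w = len(img), len(img[0])
    return [[img[int(r * h / new_h)][int(c * w / new_w)]
             for c in range(new_w)] for r in range(new_h)]

def rrp(img, scale_range=(0.8, 1.2), patch=4, rng=None):
    """Randomly resize the image, zero out one random patch, and resize
    back to the original shape -- an illustrative sketch of RRP."""
    rng = rng or random.Random(0)
    h, w = len(img), len(img[0])
    s = rng.uniform(*scale_range)
    nh, nw = max(1, int(h * s)), max(1, int(w * s))
    out = resize_nearest(img, nh, nw)
    # remove (zero) a random patch to destroy localized adversarial noise
    r0 = rng.randrange(max(1, nh - patch + 1))
    c0 = rng.randrange(max(1, nw - patch + 1))
    for r in range(r0, min(nh, r0 + patch)):
        for c in range(c0, min(nw, c0 + patch)):
            out[r][c] = 0
    return resize_nearest(out, h, w)
```

Because the output shape matches the input shape, the transformation can sit in front of an unmodified classifier; the randomness makes the exact preprocessing unpredictable to an attacker.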
{"title":"Decision Guided Robust DL Classification of Adversarial Images Combining Weaker Defenses","authors":"Shubhajit Datta;Manaar Alam;Arijit Mondal;Debdeep Mukhopadhyay;Partha Pratim Chakrabarti","doi":"10.1109/JETCAS.2024.3497295","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3497295","url":null,"abstract":"Adversarial examples make Deep Learning (DL) models vulnerable to safe deployment in practical systems. Although several techniques have been proposed in the literature, defending against adversarial attacks is still challenging. The current work identifies weaknesses of traditional strategies in detecting and classifying adversarial examples. To overcome these limitations, we carefully analyze techniques like binary detector and ensemble method, and compose them in a manner which mitigates the limitations. We also effectively develop a re-attack strategy, a randomization technique called RRP (Random Resizing and Patch-removing), and a rule-based decision method. Our proposed method, BEARR (Binary detector with Ensemble and re-Attacking scheme including Randomization and Rule-based decision technique) detects adversarial examples as well as classifies those examples with a higher accuracy compared to contemporary methods. We evaluate BEARR on standard image classification datasets: CIFAR-10, CIFAR-100, and tiny-imagenet as well as two real-world datasets: plantvillage and chest X-ray in the presence of state-of-the-art adversarial attack techniques. We have also validated BEARR against a more potent attacker who has perfect knowledge of the protection mechanism. 
We observe that BEARR is significantly better than existing methods in the context of detection and classification accuracy of adversarial examples.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"758-772"},"PeriodicalIF":3.7,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Systematical Evasion From Learning-Based Microarchitectural Attack Detection Tools
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-04 | DOI: 10.1109/JETCAS.2024.3491497
Debopriya Roy Dipta;Jonathan Tan;Berk Gulmezoglu
Microarchitectural attacks threaten the security of individuals on a diverse set of platforms, such as personal computers, mobile phones, cloud environments, and AR/VR devices. Chip vendors struggle to patch every hardware vulnerability in a timely manner, leaving billions of people's private information under threat. Hence, dynamic attack detection tools, which utilize hardware performance counters and machine learning (ML) models, have become popular for detecting ongoing attacks. In this study, we evaluate the robustness of various ML-based detection models with a sophisticated fuzzing framework. The framework manipulates hardware performance counters in a controlled manner using individual fuzzing blocks. The framework is then leveraged to modify microarchitectural attack source code and evade the detection tools. We evaluate our fuzzing framework in terms of time overhead, achieved leakage rate, and the number of trials needed to successfully evade detection.
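The paper's fuzzing blocks operate on real hardware counters; as a toy illustration of the underlying evasion principle (interleaving benign work to dilute a counter-based signature below a detector's threshold), here is a sketch with entirely synthetic counter values. `run_attack`, `detector`, and every number are invented for illustration and do not come from the paper:

```python
import random

def run_attack(fuzz_every=0, rng=None):
    """Simulate an attack loop whose per-window 'cache miss' counter is
    high; interleaving a benign fuzzing block every `fuzz_every` windows
    dilutes the signature. All values are synthetic, not real counters."""
    rng = rng or random.Random(1)
    windows = []
    for i in range(100):
        miss = rng.uniform(0.8, 1.0)  # attack step: high miss rate
        if fuzz_every and i % fuzz_every == 0:
            # benign filler work roughly halves this window's miss rate
            miss = (miss + rng.uniform(0.0, 0.1)) / 2
        windows.append(miss)
    return windows

def detector(windows, threshold=0.75):
    """Flag the trace if the mean counter value exceeds the threshold."""
    return sum(windows) / len(windows) > threshold
```

The trade-off the paper measures appears directly: more frequent fuzzing blocks lower the detection score but slow the attack (time overhead) and reduce the leakage rate per unit time.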
{"title":"Systematical Evasion From Learning-Based Microarchitectural Attack Detection Tools","authors":"Debopriya Roy Dipta;Jonathan Tan;Berk Gulmezoglu","doi":"10.1109/JETCAS.2024.3491497","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3491497","url":null,"abstract":"Microarchitectural attacks threaten the security of individuals in a diverse set of platforms, such as personal computers, mobile phones, cloud environments, and AR/VR devices. Chip vendors are struggling to patch every hardware vulnerability in a timely manner, leaving billions of people’s private information under threat. Hence, dynamic attack detection tools which utilize hardware performance counters and machine learning (ML) models, have become popular for detecting ongoing attacks. In this study, we evaluate the robustness of various ML-based detection models with a sophisticated fuzzing framework. The framework manipulates hardware performance counters in a controlled manner using individual fuzzing blocks. Later, the framework is leveraged to modify the microarchitecture attack source code and to evade the detection tools. We evaluate our fuzzing framework with time overhead, achieved leakage rate, and the number of trials to successfully evade the detection.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"823-833"},"PeriodicalIF":3.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
SecureComm: A Secure Data Transfer Framework for Neural Network Inference on CPU-FPGA Heterogeneous Edge Devices
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-04 | DOI: 10.1109/JETCAS.2024.3491169
Tian Chen;Yu-An Tan;Chunying Li;Zheng Zhang;Weizhi Meng;Yuanzhang Li
With the increasing popularity of heterogeneous computing systems in Artificial Intelligence (AI) applications, ensuring the confidentiality and integrity of sensitive data transferred between different elements has become a critical challenge. In this paper, we propose an enhanced security framework called SecureComm to protect data transfer between an ARM CPU and an FPGA through Double Data Rate (DDR) memory on CPU-FPGA heterogeneous platforms. SecureComm extends the SM4 crypto module with a proposed Message Authentication Code (MAC) to ensure data confidentiality and integrity. It also constructs smart queues in the shared DDR memory, which work in conjunction with the designed protocols to help schedule data flow and to adapt flexibly to AI tasks with different data scales. Furthermore, some of the hardware modules of SecureComm are improved and encapsulated as independent IPs, extending their applicability beyond the scope of this paper. We implemented several ARM CPU-FPGA collaborative AI applications to justify the security and evaluate the timing overhead of SecureComm. We also deployed SecureComm on non-AI tasks to demonstrate its versatility, and we offer suggestions for its use in tasks of varying data scales.
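SM4 is not available in the Python standard library, so the following sketch shows only the generic encrypt-then-MAC framing such a shared-memory channel could use: a SHA-256 counter-mode keystream stands in for the SM4 cipher, and HMAC-SHA256 stands in for the paper's proposed MAC. `seal`, `open_`, and the frame layout are assumptions for illustration, not SecureComm's actual protocol:

```python
import hmac
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Counter-mode keystream from SHA-256 -- a stand-in for SM4."""
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def seal(enc_key: bytes, mac_key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    """Encrypt-then-MAC framing: the ciphertext provides confidentiality,
    the MAC over nonce||ciphertext provides integrity."""
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def open_(enc_key: bytes, mac_key: bytes, frame: bytes, nonce_len: int = 8) -> bytes:
    """Verify the MAC before decrypting; reject tampered frames."""
    nonce, ct, tag = frame[:nonce_len], frame[nonce_len:-32], frame[-32:]
    expect = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expect):
        raise ValueError("MAC check failed: frame tampered or corrupted")
    return bytes(c ^ k for c, k in zip(ct, keystream(enc_key, nonce, len(ct))))
```

Verifying the tag before touching the ciphertext is what lets either side of the DDR channel detect modification of data in the shared memory.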
{"title":"SecureComm: A Secure Data Transfer Framework for Neural Network Inference on CPU-FPGA Heterogeneous Edge Devices","authors":"Tian Chen;Yu-An Tan;Chunying Li;Zheng Zhang;Weizhi Meng;Yuanzhang Li","doi":"10.1109/JETCAS.2024.3491169","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3491169","url":null,"abstract":"With the increasing popularity of heterogeneous computing systems in Artificial Intelligence (AI) applications, ensuring the confidentiality and integrity of sensitive data transferred between different elements has become a critical challenge. In this paper, we propose an enhanced security framework called SecureComm to protect data transfer between ARM CPU and FPGA through Double Data Rate (DDR) memory on CPU-FPGA heterogeneous platforms. SecureComm extends the SM4 crypto module by incorporating a proposed Message Authentication Code (MAC) to ensure data confidentiality and integrity. It also constructs smart queues in the shared memory of DDR, which work in conjunction with the designed protocols to help schedule data flow and facilitate flexible adaptation to various AI tasks with different data scales. Furthermore, some of the hardware modules of SecureComm are improved and encapsulated as independent IPs to increase their versatility beyond the scope of this paper. We implemented several ARM CPU-FPGA collaborative AI applications to justify the security and evaluate the timing overhead of SecureComm. 
We also deployed SecureComm to non-AI tasks to demonstrate its versatility, ultimately offering suggestions for its use in tasks of varying data scales.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"811-822"},"PeriodicalIF":3.7,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Variable Resolution Pixel Quantization for Low Power Machine Vision Application on Edge
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-11-01 | DOI: 10.1109/JETCAS.2024.3490504
Senorita Deb;Sai Sanjeet;Prabir Kumar Biswas;Bibhu Datta Sahoo
This work describes an approach to pixel quantization using variable resolution, made feasible by image transformation in the analog domain. The main aim is to reduce the average bits-per-pixel (BPP) necessary for representing an image while maintaining the classification accuracy of a Convolutional Neural Network (CNN) trained for image classification. The proposed algorithm is based on the Hadamard transform, which enables low-resolution variable quantization by the analog-to-digital converter (ADC), thus reducing power dissipation in hardware at the sensor node. Despite the trade-offs inherent in image transformation, the proposed algorithm achieves competitive accuracy across various image sizes and ADC configurations, highlighting the importance of considering both accuracy and power consumption in edge computing applications. The schematic of a novel 1.5-bit ADC that incorporates the Hadamard transform is also proposed. A hardware implementation of the analog transformation followed by software-based variable quantization is carried out for the CIFAR-10 test dataset. The digitized data show that the network can still identify transformed images, with a remarkable 90% accuracy for 3-BPP transformed images under the proposed method.
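The core idea (transform a block with the Hadamard transform, then spend more ADC bits on low-order coefficients than high-order ones so the average BPP drops) can be sketched in software. The bit plan and `full_scale` below are illustrative choices, not the paper's configuration:

```python
def hadamard_1d(v):
    """Unnormalized fast Walsh-Hadamard transform (length must be 2**k)."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), h * 2):
            for j in range(i, i + h):
                a, b = v[j], v[j + h]
                v[j], v[j + h] = a + b, a - b
        h *= 2
    return v

def hadamard_2d(block):
    """Apply the 1D transform along rows, then along columns."""
    rows = [hadamard_1d(r) for r in block]
    cols = [hadamard_1d(c) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def quantize(coeffs, bit_plan, full_scale=2048):
    """Uniform quantization: a coefficient stored with b bits uses a step
    of 2*full_scale / 2**b; b == 0 drops the coefficient entirely."""
    out = []
    for crow, brow in zip(coeffs, bit_plan):
        row = []
        for c, b in zip(crow, brow):
            if b == 0:
                row.append(0)
            else:
                step = 2 * full_scale / (2 ** b)
                row.append(round(c / step) * step)
        out.append(row)
    return out
```

For a constant 4x4 block all the energy lands in the top-left coefficient, so a plan that spends its bits there loses nothing while the average BPP stays far below a uniform 8 bits per pixel.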
{"title":"Variable Resolution Pixel Quantization for Low Power Machine Vision Application on Edge","authors":"Senorita Deb;Sai Sanjeet;Prabir Kumar Biswas;Bibhu Datta Sahoo","doi":"10.1109/JETCAS.2024.3490504","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3490504","url":null,"abstract":"This work describes an approach towards pixel quantization using variable resolution which is made feasible using image transformation in the analog domain. The main aim is to reduce the average bits-per-pixel (BPP) necessary for representing an image while maintaining the classification accuracy of a Convolutional Neural Network (CNN) that is trained for image classification. The proposed algorithm is based on the Hadamard transform that leads to a low-resolution variable quantization by the analog-to-digital converter (ADC) thus reducing the power dissipation in hardware at the sensor node. Despite the trade-offs inherent in image transformation, the proposed algorithm achieves competitive accuracy levels across various image sizes and ADC configurations, highlighting the importance of considering both accuracy and power consumption in edge computing applications. The schematic of a novel 1.5 bit ADC that incorporates the Hadamard transform is also proposed. A hardware implementation of the analog transformation followed by software-based variable quantization is done for the CIFAR-10 test dataset. 
The digitized data shows that the network can still identify transformed images with a remarkable 90% accuracy for 3-BPP transformed images following the proposed method.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"15 1","pages":"58-71"},"PeriodicalIF":3.7,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143601937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Extracting DNN Architectures via Runtime Profiling on Mobile GPUs
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-30 | DOI: 10.1109/JETCAS.2024.3488597
Dong Hyub Kim;Jonah O’Brien Weiss;Sandip Kundu
Deep Neural Networks (DNNs) have become invaluable intellectual property for AI providers due to advancements fueled by a decade of research and development. However, recent studies have demonstrated the effectiveness of model extraction attacks, which threaten this value by stealing DNN models. These attacks can lead to misuse of personal data, safety risks in critical systems, and the spread of misinformation. This paper explores model extraction attacks on DNN models deployed on mobile devices, using runtime profiles as a side-channel. Since mobile devices are resource constrained, DNN deployments require optimization efforts to reduce latency. The main hurdle in extracting DNN architectures in this scenario is that optimization techniques, such as operator-level and graph-level fusion, can obfuscate the association between runtime profile operators and their corresponding DNN layers, posing challenges for adversaries to accurately predict the computation performed. To overcome this, we propose a novel method analyzing GPU call profiles to identify the original DNN architecture. Our approach achieves full accuracy in extracting DNN architectures from a predefined set, even when layer information is obscured. For unseen architectures, a layer-by-layer hyperparameter extraction method guided by sub-layer patterns is introduced, also achieving high accuracy. This research achieves two firsts: 1) targeting mobile GPUs for DNN architecture extraction and 2) successfully extracting architectures from optimized models with fused layers.
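One way to picture matching an observed GPU kernel sequence (possibly altered by operator fusion) against known architecture fingerprints is fuzzy sequence matching. The fingerprints below are invented toy sequences, and `SequenceMatcher` is a generic stand-in for whatever similarity measure the paper's method actually uses:

```python
from difflib import SequenceMatcher

# Illustrative kernel-name sequences, not real GPU traces.
ARCH_FINGERPRINTS = {
    "resnet-like":    ["conv", "conv", "add", "conv", "conv", "add", "pool", "fc"],
    "vgg-like":       ["conv", "conv", "pool", "conv", "conv", "pool", "fc", "fc"],
    "mobilenet-like": ["dwconv", "conv", "dwconv", "conv", "pool", "fc"],
}

def identify(profile, fingerprints=ARCH_FINGERPRINTS):
    """Score an observed kernel sequence against each known fingerprint
    and return the best match with its similarity ratio. Fuzzy matching
    tolerates operators dropped or merged by fusion."""
    scores = {name: SequenceMatcher(None, profile, fp).ratio()
              for name, fp in fingerprints.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

Even with the pooling kernel fused away, the remaining sequence still ranks the correct family first, which mirrors the paper's point that fusion obscures but does not erase the architectural signal.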
{"title":"Extracting DNN Architectures via Runtime Profiling on Mobile GPUs","authors":"Dong Hyub Kim;Jonah O’Brien Weiss;Sandip Kundu","doi":"10.1109/JETCAS.2024.3488597","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3488597","url":null,"abstract":"Deep Neural Networks (DNNs) have become invaluable intellectual property for AI providers due to advancements fueled by a decade of research and development. However, recent studies have demonstrated the effectiveness of model extraction attacks, which threaten this value by stealing DNN models. These attacks can lead to misuse of personal data, safety risks in critical systems, and the spread of misinformation. This paper explores model extraction attacks on DNN models deployed on mobile devices, using runtime profiles as a side-channel. Since mobile devices are resource constrained, DNN deployments require optimization efforts to reduce latency. The main hurdle in extracting DNN architectures in this scenario is that optimization techniques, such as operator-level and graph-level fusion, can obfuscate the association between runtime profile operators and their corresponding DNN layers, posing challenges for adversaries to accurately predict the computation performed. To overcome this, we propose a novel method analyzing GPU call profiles to identify the original DNN architecture. Our approach achieves full accuracy in extracting DNN architectures from a predefined set, even when layer information is obscured. For unseen architectures, a layer-by-layer hyperparameter extraction method guided by sub-layer patterns is introduced, also achieving high accuracy. 
This research achieves two firsts: 1) targeting mobile GPUs for DNN architecture extraction and 2) successfully extracting architectures from optimized models with fused layers.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"620-633"},"PeriodicalIF":3.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
On Function-Coupled Watermarks for Deep Neural Networks
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-30 | DOI: 10.1109/JETCAS.2024.3476386
Xiangyu Wen;Yu Li;Wei Jiang;Qiang Xu
Well-performing deep neural networks (DNNs) generally require massive labeled data and computational resources for training. Various watermarking techniques have been proposed to protect such intellectual property (IP), wherein DNN providers can claim IP ownership by retrieving their embedded watermarks. While promising results are reported in the literature, existing solutions are vulnerable to watermark removal attacks, such as model fine-tuning, model pruning, and model extraction. In this paper, we propose a novel DNN watermarking solution that can effectively defend against the above attacks. Our key insight is to couple the watermark to the model's functionality so tightly that removing the watermark inevitably degrades the model's performance on normal inputs. Specifically, on one hand, we sample inputs from the original training dataset and fuse them as watermark images. On the other hand, we randomly mask model weights during training to distribute the watermark information across the network. Our method successfully defends against common watermark removal attacks, watermark ambiguity attacks, and widely used backdoor detection methods, outperforming existing solutions as demonstrated by evaluation results on various benchmarks. Our code is available at: https://github.com/cure-lab/Function-Coupled-Watermark.
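The two ingredients named in the abstract, fusing in-distribution training samples into watermark triggers and randomly masking weights during training, can be sketched as follows. The blend ratio, drop rate, and function names are illustrative assumptions; see the linked repository for the authors' actual implementation:

```python
import random

def fuse(img_a, img_b, alpha=0.5):
    """Blend two in-distribution training images (2D lists of floats)
    into one watermark trigger, coupling the trigger to natural image
    features rather than an out-of-distribution pattern."""
    return [[alpha * a + (1 - alpha) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def masked_weights(weights, drop=0.3, rng=None):
    """Randomly zero a fraction of weights for one training step so the
    watermark response is distributed across the whole network instead
    of a few prunable neurons."""
    rng = rng or random.Random(0)
    return [w if rng.random() >= drop else 0.0 for w in weights]
```

Because the trigger is built from real training content and the watermark response is spread across many weights, pruning or fine-tuning away the watermark also damages normal accuracy, which is the function-coupling property the title refers to.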
{"title":"On Function-Coupled Watermarks for Deep Neural Networks","authors":"Xiangyu Wen;Yu Li;Wei Jiang;Qiang Xu","doi":"10.1109/JETCAS.2024.3476386","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3476386","url":null,"abstract":"Well-performed deep neural networks (DNNs) generally require massive labeled data and computational resources for training. Various watermarking techniques are proposed to protect such intellectual properties (IPs), wherein the DNN providers can claim IP ownership by retrieving their embedded watermarks. While promising results are reported in the literature, existing solutions suffer from watermark removal attacks, such as model fine-tuning, model pruning, and model extraction. In this paper, we propose a novel DNN watermarking solution that can effectively defend against the above attacks. Our key insight is to enhance the coupling of the watermark and model functionalities such that removing the watermark would inevitably degrade the model’s performance on normal inputs. Specifically, on one hand, we sample inputs from the original training dataset and fuse them as watermark images. On the other hand, we randomly mask model weights during training to distribute the watermark information in the network. Our method can successfully defend against common watermark removal attacks, watermark ambiguity attacks, and existing widely used backdoor detection methods, outperforming existing solutions as demonstrated by evaluation results on various benchmarks. 
Our code is available at: \u0000<uri>https://github.com/cure-lab/Function-Coupled-Watermark</uri>\u0000.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"608-619"},"PeriodicalIF":3.7,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10738841","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Model Agnostic Contrastive Explanations for Classification Models
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-24 | DOI: 10.1109/JETCAS.2024.3486114
Amit Dhurandhar;Tejaswini Pedapati;Avinash Balakrishnan;Pin-Yu Chen;Karthikeyan Shanmugam;Ruchir Puri
Extensive surveys on explanations suitable for humans claim that being contrastive is one of an explanation's most important traits. A few methods have been proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose the Model Agnostic Contrastive Explanations Method (MACEM), which can generate contrastive explanations for any classification model where one is able only to query the class probabilities for a desired input. This allows us to generate contrastive explanations not only for neural networks, but also for models such as random forests, boosted trees, and even arbitrary ensembles that are still among the state of the art when learning on tabular data. Our method is also applicable to scenarios where only black-box access to the model is provided, implying that we can obtain only the predictions and prediction probabilities. With the advent of larger models, it is increasingly common to work in the black-box scenario, where the user will not necessarily have access to the model weights or parameters and will only be able to interact with the model through an API. As such, to obtain meaningful explanations we propose a principled and scalable approach to handling real and categorical features, leading to novel formulations for computing the pertinent positives and negatives that form the essence of a contrastive explanation.
{"title":"Model Agnostic Contrastive Explanations for Classification Models","authors":"Amit Dhurandhar;Tejaswini Pedapati;Avinash Balakrishnan;Pin-Yu Chen;Karthikeyan Shanmugam;Ruchir Puri","doi":"10.1109/JETCAS.2024.3486114","DOIUrl":"https://doi.org/10.1109/JETCAS.2024.3486114","url":null,"abstract":"Extensive surveys on explanations that are suitable for humans, claims that an explanation being contrastive is one of its most important traits. A few methods have been proposed to generate contrastive explanations for differentiable models such as deep neural networks, where one has complete access to the model. In this work, we propose a method, Model Agnostic Contrastive Explanations Method (MACEM), that can generate contrastive explanations for any classification model where one is able to only query the class probabilities for a desired input. This allows us to generate contrastive explanations for not only neural networks, but also models such as random forests, boosted trees and even arbitrary ensembles that are still amongst the state-of-the-art when learning on tabular data. Our method is also applicable to the scenarios where only the black-box access of the model is provided, implying that we can only obtain the predictions and prediction probabilities. With the advent of larger models, it is increasingly prevalent to be working in the black-box scenario, where the user will not necessarily have access to the model weights or parameters, and will only be able to interact with the model using an API. As such, to obtain meaningful explanations we propose a principled and scalable approach to handle real and categorical features leading to novel formulations for computing pertinent positives and negatives that form the essence of a contrastive explanation. 
A detailed treatment of this nature where we focus on scalability and handle different data types was not performed in the previous work, which assumed all features to be positive real valued with zero being indicative of the least interesting value. We part with this strong implicit assumption and generalize these methods so as to be applicable across a much wider range of problem settings. We quantitatively as well as qualitatively validate our approach over public datasets covering diverse domains.","PeriodicalId":48827,"journal":{"name":"IEEE Journal on Emerging and Selected Topics in Circuits and Systems","volume":"14 4","pages":"789-798"},"PeriodicalIF":3.7,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142821287","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Stealing the Invisible: Unveiling Pre-Trained CNN Models Through Adversarial Examples and Timing Side-Channels
IF 3.7 | Tier 2, Engineering & Technology | Q2 ENGINEERING, ELECTRICAL & ELECTRONIC | Pub Date: 2024-10-23 | DOI: 10.1109/JETCAS.2024.3485133
Shubhi Shukla;Manaar Alam;Pabitra Mitra;Debdeep Mukhopadhyay
Machine learning, with its myriad applications, has become an integral component of numerous AI systems. A common practice in this domain is the use of transfer learning, where a pre-trained model's architecture, readily available to the public, is fine-tuned to suit specific tasks. As Machine Learning as a Service (MLaaS) platforms increasingly use pre-trained models in their backends, it is crucial to safeguard these architectures and understand their vulnerabilities. In this work, we present ArchWhisperer, a model fingerprinting attack based on the novel observation that the classification patterns of adversarial images can be used as a means to steal the models. Furthermore, the adversarial image classifications in conjunction with model inference times are used to further enhance our attack in terms of both attack effectiveness and query budget. ArchWhisperer is designed for typical user-level access in remote MLaaS environments, and it exploits the varying misclassifications of adversarial images across different models to fingerprint several renowned Convolutional Neural Network (CNN) and Vision Transformer (ViT) architectures. We profile remote model inference times to reduce the number of adversarial images needed, subsequently decreasing the number of queries required. We present our results over 27 pre-trained models of different CNN and ViT architectures using the CIFAR-10 dataset and demonstrate a high accuracy of 88.8% while keeping the query budget under 20.
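The two signals the abstract combines, per-model misclassification patterns on a fixed set of adversarial probes and mean inference time, can be pictured as a simple fingerprint lookup. The candidate table, label vectors, latencies, and scoring weights below are all invented for illustration; they are not ArchWhisperer's actual fingerprints:

```python
def fingerprint_match(observed_labels, observed_time, candidates, time_tol=0.2):
    """Match an observed (label pattern, mean inference time) pair
    against candidate fingerprints. Label agreement narrows the family;
    timing consistency breaks ties between similar architectures."""
    best, best_score = None, -1.0
    for name, (labels, latency) in candidates.items():
        score = sum(a == b for a, b in zip(observed_labels, labels)) / len(labels)
        if abs(observed_time - latency) <= time_tol * latency:
            score += 0.5  # timing is consistent with this candidate
        if score > best_score:
            best, best_score = name, score
    return best

# Illustrative fingerprints: probe labels + mean latency in ms (made up).
CANDIDATES = {
    "resnet18":  ([3, 5, 5, 1, 0], 12.0),
    "resnet34":  ([3, 5, 5, 1, 2], 21.0),
    "vit-small": ([7, 5, 2, 1, 0], 30.0),
}
```

Because each probe's label is one query, a short probe set plus a timing measurement keeps the query budget small, which is the trade-off the paper quantifies.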
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 4, pp. 634–646.
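The core fingerprinting idea above — a fixed set of adversarial probe images produces a model-specific pattern of (mis)classifications that can be matched against pre-profiled architectures — can be sketched as follows. This is an illustrative sketch only: `query_model`, the probe set, and the fingerprint table are stand-ins, not the paper's actual API, and the real attack additionally uses inference-time profiling.

```python
# Illustrative sketch: identify a remote model's architecture from the label
# pattern it produces on a fixed set of adversarial probe inputs.
# `known_fingerprints` would be built offline by profiling candidate models.

def fingerprint(query_model, probes):
    """The tuple of predicted labels serves as the model's fingerprint."""
    return tuple(query_model(x) for x in probes)

def identify(query_model, probes, known_fingerprints):
    """Match the observed label pattern against pre-profiled architectures."""
    observed = fingerprint(query_model, probes)

    def agreement(fp):
        return sum(a == b for a, b in zip(observed, fp))

    return max(known_fingerprints, key=lambda arch: agreement(known_fingerprints[arch]))

# Toy demo: two candidate architectures that disagree on the third probe.
probes = [0, 1, 2]                        # stand-ins for adversarial images
known = {"resnet18": (3, 1, 7), "vgg16": (3, 1, 5)}
victim = lambda x: {0: 3, 1: 1, 2: 5}[x]  # remote model under attack
print(identify(victim, probes, known))    # → vgg16
```

A real attacker would choose probes that maximize disagreement between candidate architectures, so that few queries suffice to separate them.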
RLFL: A Reinforcement Learning Aggregation Approach for Hybrid Federated Learning Systems Using Full and Ternary Precision
IF 3.7 · CAS Zone 2 (Engineering & Technology) · JCR Q2 (Engineering, Electrical & Electronic) · Pub Date: 2024-10-18 · DOI: 10.1109/JETCAS.2024.3483554
HamidReza Imani;Jeff Anderson;Samuel Farid;Abdolah Amirany;Tarek El-Ghazawi
Federated Learning (FL) has emerged as an approach to provide a privacy-preserving and communication-efficient Machine Learning (ML) framework in mobile-edge environments, which are likely to be resource-constrained and heterogeneous. The required precision level and performance of each device may therefore vary with circumstances, giving rise to designs containing mixed-precision and quantized models. Among the various quantization schemes, binary and ternary representations are significant since they enable arrangements that strike effective balances between performance and precision. In this paper, we propose RLFL, a hybrid ternary/full-precision FL system, along with a Reinforcement Learning (RL) aggregation method, with the goal of improved performance compared to a homogeneous ternary environment. The system consists of a mix of clients with full-precision models and resource-constrained clients with ternary ML models. Aggregating models with ternary and full-precision weights using traditional aggregation approaches presents a challenge, however, due to the disparity in weight magnitudes. To improve accuracy, we use a deep RL model to explore and optimize the contribution assigned to each client’s model during aggregation in each iteration. We evaluate and compare the accuracy and communication overhead of the proposed approach against prior work on the classification of the MNIST, FMNIST, and CIFAR10 datasets. Evaluation results show that the proposed RLFL system, along with its aggregation technique, outperforms existing FL approaches in accuracy by 5% to 19% while imposing negligible computation overhead.
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 4, pp. 673–687.
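For concreteness, one common way to obtain the ternary client models described above is threshold-based ternarization. The sketch below follows the Ternary Weight Networks rule (threshold at 0.7 of the mean absolute weight); this rule is an assumption of the example, not necessarily RLFL's exact quantizer.

```python
import numpy as np

def ternarize(w: np.ndarray) -> np.ndarray:
    """Quantize a weight tensor to the three values {-alpha, 0, +alpha}.

    The 0.7 * mean|w| threshold follows Ternary Weight Networks and is an
    assumption here; RLFL's actual quantization scheme may differ.
    """
    delta = 0.7 * np.mean(np.abs(w))              # per-tensor threshold
    mask = np.abs(w) > delta                      # weights large enough to keep
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # shared magnitude
    return alpha * np.sign(w) * mask              # small weights collapse to 0

w = np.array([0.9, -0.05, 0.4, -0.8, 0.02])
print(ternarize(w))                               # only +/-0.7 and 0 survive
```

Weights quantized this way can be transmitted as two-bit codes plus one scaling factor per tensor, which is what makes the ternary clients communication-efficient; the disparity between `alpha`-scaled ternary weights and full-precision weights is exactly what the RL aggregator must compensate for.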
A Reinforcement Learning-Based ELF Adversarial Malicious Sample Generation Method
IF 3.7 · CAS Zone 2 (Engineering & Technology) · JCR Q2 (Engineering, Electrical & Electronic) · Pub Date: 2024-10-15 · DOI: 10.1109/JETCAS.2024.3481273
Mingfu Xue;Jinlong Fu;Zhiyuan Li;Shifeng Ni;Heyi Wu;Leo Yu Zhang;Yushu Zhang;Weiqiang Liu
In recent years, domestic Linux operating systems have developed rapidly, but the threat of ELF viruses has become increasingly prominent. Currently, domestic antivirus software for information technology application innovation (ITAI) operating systems shows insufficient capability in detecting ELF viruses, and research on generating malicious samples in ELF format is scarce. To fill this gap and meet the growing application needs of domestic antivirus software companies, this paper proposes an automatic reinforcement-learning-based technique for generating adversarial ELF malicious samples. Within this reinforcement-learning framework, a sample is processed through cycles of feature extraction, malicious-content detection, agent decision-making, and evasion operations until it evades detection by antivirus engines. Specifically, nine feature-extractor subclasses extract features along multiple dimensions, the PPO algorithm serves as the agent algorithm, and the action table in the evasion module contains 11 evasion operations for ELF malicious samples. The method is experimentally verified on the ITAI operating system, with an ELF malicious sample set from the Linux x86 platform as the original sample set. ClamAV's detection rate on this sample set drops from 98% before processing to 25% after processing, and 360 Security's detection rate drops from 4% to 1%. Furthermore, after processing, the average number of VirusTotal engines that detect the maliciousness of a sample decreases from 39 to 15. Many malicious samples were detected by 41–43 engines on VirusTotal before processing, while after the evasion processing only 8–9 engines can detect the malware.
In terms of executability and malicious-function consistency, the processed samples still run normally and their malicious functions remain consistent with those before processing. Overall, the proposed method can effectively generate adversarial ELF malware samples. Using it to generate malicious samples for testing and training antivirus software can improve the software's detection and defense capability against malware.
IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 14, no. 4, pp. 743–757.
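The detect → choose action → mutate cycle described above can be schematized as below. Everything in this sketch is an illustrative stand-in: the paper pairs real antivirus engines with eleven ELF-specific transformations chosen by a PPO policy over extracted features, whereas this toy version uses a substring detector, three byte-level edits, and a simple round-robin picker.

```python
# Schematic evade-detection loop: keep mutating a sample until the
# detector no longer flags it. All names and actions are illustrative.

ACTIONS = {
    "append_junk": lambda s: s + b"\x00" * 16,             # pad with dead bytes
    "rename_section": lambda s: s.replace(b".evil", b".data"),
    "append_marker": lambda s: s + b"\x90",                # another no-op edit
}

def detector(sample: bytes) -> bool:
    """Stand-in for an AV engine: flags any sample containing '.evil'."""
    return b".evil" in sample

def evade(sample: bytes, max_steps: int = 20) -> bytes:
    for step in range(max_steps):
        if not detector(sample):        # evasion achieved, stop early
            return sample
        # A learned policy (PPO in the paper) would pick the action based
        # on extracted features; this sketch just cycles through the table.
        name = list(ACTIONS)[step % len(ACTIONS)]
        sample = ACTIONS[name](sample)
    return sample

mutated = evade(b"\x7fELF...payload.evil...")
print(detector(mutated))  # → False
```

The reward signal in the real system comes from whether the mutated sample still evades the engines while remaining executable, which is why the paper also verifies executability and malicious-function consistency after processing.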