
IEEE Transactions on Dependable and Secure Computing: Latest Publications

Window Canaries: Re-thinking Stack Canaries for Architectures with Register Windows
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3230748
Kai Lehniger, P. Langendorfer
This paper presents Window Canaries, a novel approach to Stack Canaries for architectures with register windows that protects return addresses and stack pointers without the need to add additional instructions to each potentially vulnerable function. Instead, placement and checking of the canary word are moved to the window exception handlers responsible for handling register window overflows and underflows. The approach offers low performance overhead while guaranteeing that return addresses are protected against stack buffer overflows, without relying on a heuristic that decides which functions to instrument. The contributions of this paper are a complete implementation of the approach for the Xtensa LX architecture with the register window option, as well as a performance evaluation and a discussion of advantages and drawbacks.
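As a purely illustrative sketch of the control flow the abstract describes (a Python toy model, not the Xtensa LX implementation; the SpillArea class, function names, and addresses are invented here), the overflow handler places the canary next to the spilled return address and the underflow handler checks it before the address is reused, so no per-function instrumentation is needed:

```python
import secrets

CANARY = secrets.randbits(32)   # per-boot random canary value

class SpillArea:
    """Toy model of the stack area where register windows are spilled."""
    def __init__(self):
        self.frames = []   # each frame: {"ret": ..., "canary": ..., "locals": [...]}

def window_overflow_handler(spill, ret_addr, local_data):
    # On a (simulated) window-overflow exception, spill the frame and place
    # the canary right next to the saved return address.
    spill.frames.append({"ret": ret_addr, "canary": CANARY, "locals": list(local_data)})

def vulnerable_copy(spill, payload):
    # Model a stack buffer overflow that runs past the locals and overwrites
    # the spilled canary and return address of the topmost frame.
    frame = spill.frames[-1]
    frame["canary"], frame["ret"] = payload

def window_underflow_handler(spill):
    # On a (simulated) window-underflow exception, verify the canary before
    # trusting the spilled return address; no per-function checks are needed.
    frame = spill.frames.pop()
    if frame["canary"] != CANARY:
        raise RuntimeError("stack smashing detected in window underflow handler")
    return frame["ret"]

spill = SpillArea()
window_overflow_handler(spill, ret_addr=0x40001234, local_data=[0] * 4)
vulnerable_copy(spill, (0xDEADBEEF, 0x40660000))   # attacker-controlled overwrite
try:
    window_underflow_handler(spill)
except RuntimeError as err:
    print(err)   # corruption is caught in the exception handler
```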
{"title":"Window Canaries: Re-thinking Stack Canaries for Architectures with Register Windows","authors":"Kai Lehniger, P. Langendorfer","doi":"10.1109/tdsc.2022.3230748","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3230748","url":null,"abstract":"This paper presents Window Canaries, a novel approach to Stack Canaries for architectures with a register window that protects return addresses and stack pointers without the need of adding additional instruction to each potentially vulnerable function. Instead, placement and check of the canary word is moved to window exception handlers that are responsible to handle register window overflows and underflows. The approach offers low performance overhead while guaranteeing that return addresses are protected by stack buffer overflows without relying on a heuristic that decides which functions to instrument. The contributions of this paper are a complete implementation of the approach for the Xtensa LX architecture with register window option as well as a performance evaluation and discussion of advantages and drawbacks.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Robust Blind Video Watermarking Against Geometric Deformations and Online Video Sharing Platform Processing
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3232484
Mingze He, Hongxia Wang, Fei Zhang, S. Abdullahi, Ling Yang
In recent years, online video sharing platforms have become widely available on social networks. To protect copyright and track the origins of these shared videos, several video watermarking methods have been proposed. However, their robustness is significantly degraded under geometric deformations, which destroy the synchronization between watermark embedding and extraction. To this end, we propose a novel robust blind video watermarking scheme that embeds the watermark into low-order recursive Zernike moments. To reduce the time complexity, we give an efficient computation method that exploits the characteristics of video and moments. The moment accuracy is greatly improved by the introduction of a recursive computation method. Furthermore, we design an optimization strategy to enhance visual quality and reduce distortion drift of watermarked videos by analyzing the radial basis function. The robustness of the proposed scheme is verified under different attacks, including geometric deformations, aspect ratio changes, temporal synchronization attacks, and combined attacks. In practical applications, the proposed scheme effectively resists processing by video sharing platforms and screenshots taken with smartphones and PC monitors. The watermark is extracted without the host video. Experimental results show that our proposed scheme outperforms other state-of-the-art schemes in terms of imperceptibility and robustness.
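For a rough illustration of the moment-domain embedding idea (not the paper's recursive, drift-compensated method), the sketch below computes one low-order Zernike moment of a frame using the standard non-recursive definition and embeds a bit in its magnitude via quantization index modulation; the image size, order (n, m) = (4, 2), and quantization step are arbitrary choices for the example.

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Standard (non-recursive) Zernike moment A_{n,m} of a square image
    mapped onto the unit disk; requires n - |m| even and nonnegative."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)          # map pixel grid to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho = np.sqrt(x ** 2 + y ** 2)
    theta = np.arctan2(y, x)
    mask = rho <= 1.0                       # keep only the unit disk
    R = np.zeros_like(rho)                  # radial polynomial R_{n,|m|}
    for k in range((n - abs(m)) // 2 + 1):
        c = ((-1) ** k * factorial(n - k) /
             (factorial(k) * factorial((n + abs(m)) // 2 - k)
              * factorial((n - abs(m)) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V_conj = R * np.exp(-1j * m * theta)    # conjugate Zernike basis function
    pixel_area = (2.0 / (N - 1)) ** 2
    return (n + 1) / np.pi * np.sum(img[mask] * V_conj[mask]) * pixel_area

def qim_embed(mag, bit, delta=0.02):
    """Quantization index modulation: snap the magnitude onto the lattice
    associated with the watermark bit."""
    return delta * np.round((mag - bit * delta / 2) / delta) + bit * delta / 2

def qim_extract(mag, delta=0.02):
    d1 = abs(mag - qim_embed(mag, 1, delta))
    d0 = abs(mag - qim_embed(mag, 0, delta))
    return int(d1 < d0)

frame = np.random.default_rng(0).random((64, 64))   # stand-in luminance block
A = zernike_moment(frame, n=4, m=2)                  # one low-order moment
marked_mag = qim_embed(abs(A), bit=1)
print(qim_extract(marked_mag))                       # -> 1
```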
{"title":"Robust Blind Video Watermarking Against Geometric Deformations and Online Video Sharing Platform Processing","authors":"Mingze He, Hongxia Wang, Fei Zhang, S. Abdullahi, Ling Yang","doi":"10.1109/tdsc.2022.3232484","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232484","url":null,"abstract":"In recent years, online video sharing platforms have been widely available on social networks. To protect copyright and track the origins of these shared videos, some video watermarking methods have been proposed. However, their robustness performance is significantly degraded under geometric deformations, which destroy the synchronization between the watermark embedding and extraction. To this end, we propose a novel robust blind video watermarking scheme by embedding the watermark into low-order recursive Zernike moments. To reduce the time complexity, we give an efficient computation method by exploring the characteristics of video and moments. The moment accuracy is greatly improved due to the introduction of a recursive computation method. Furthermore, we design an optimization strategy to enhance visual quality and reduce distortion drift of watermarked videos by analyzing the radial basis function. The robustness of the proposed scheme is verified by different attacks, including geometric deformations, length-width ratio changes, temporal synchronization attacks, and combined attacks. In practical applications, the proposed scheme effectively resists processing from video sharing platforms and screenshots taken with smartphones and PC monitors. The watermark is extracted without the host video. Experimental results show that our proposed scheme outperforms other state-of-the-art schemes in terms of imperceptibility and robustness.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
ACA: Anonymous, Confidential and Auditable Transaction Systems for Blockchain
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3228236
Chao Lin, Xinyi Huang, Jianting Ning, D. He
The rapid development and wide application of blockchain highlight not only the significance of privacy protection (including anonymity and confidentiality) but also the necessity of auditability. While several ingenious schemes supporting both privacy protection and auditability, such as MiniLedger and traceable Monero, have been proposed, they either provide incomplete privacy protection (achieving anonymity only within a small set, or providing confidentiality but not anonymity), involve additional auditing conditions such as reaching a threshold transaction volume or requiring permissioned nodes to serve as the manager, or are restricted to specific blockchain types such as Monero. To mitigate these issues, this article proposes a generic anonymous, confidential, and auditable transaction system (named ACA), which is compatible with both UTXO-based permissionless and permissioned blockchains. The core technologies of ACA include traceable anonymous key generation and publicly verifiable authorization mechanisms built from existing cryptographic tools (i.e., public key encryption, partially homomorphic encryption, and accumulators), as well as carefully designed signatures of knowledge and smart contracts. To demonstrate the practicality of our proposal, we first prove its security, including authenticity, anonymity, confidentiality, and soundness, and then provide an instantiation to evaluate its performance. The implementation and benchmarks show that our proposal retains a performance advantage even while adding more functionality.
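One of the building blocks the abstract lists is partially homomorphic encryption. The minimal sketch below shows the additive homomorphism of the textbook Paillier cryptosystem, the kind of primitive that lets encrypted transaction amounts be summed for auditing without revealing individual values; it is not the ACA construction itself, and the primes are toy-sized rather than secure parameters.

```python
from math import gcd
import secrets

# Textbook Paillier with toy primes (NOT secure parameters).
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)              # decryption constant

def encrypt(m):
    r = secrets.randbelow(n - 2) + 1
    while gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: the product of two ciphertexts decrypts to the
# sum of the hidden amounts, which is what lets an auditor check totals
# without seeing individual values.
a, b = 1200, 345
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))    # -> 1545
```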
{"title":"ACA: Anonymous, Confidential and Auditable Transaction Systems for Blockchain","authors":"Chao Lin, Xinyi Huang, Jianting Ning, D. He","doi":"10.1109/tdsc.2022.3228236","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3228236","url":null,"abstract":"The rapid development and wide application of blockchain not only highlight the significance of privacy protection (including anonymity and confidentiality) but also the necessity of auditability. While several ingenious schemes such as MiniLedger and traceable Monero supporting both privacy protection and auditability have been proposed, they either provide incomplete privacy protection (only achieving anonymity within a small set or only providing confidentiality but not anonymity), or involve additional auditing conditions such as reaching threshold transaction volume or requiring permissioned nodes to serve as the manager, or restrict to specific blockchain types such as Monero. To mitigate these issues, this article proposes a generic anonymous, confidential, and auditable transaction system (named ACA), which is compatible with both UTXO-based permissionless and permissioned blockchains. Core technologies of ACA include designed traceable anonymous key generation and publicly verifiable authorization mechanisms from existing cryptographic tools (i.e., public key encryption, partially homomorphic encryption, and accumulator) as well as the meticulous designed signatures of knowledge and smart contract. To demonstrate the entity of our proposal, we first prove its security including authenticity, anonymity, confidentiality and soundness, and then provide an instantiation to evaluate its performance. The final implementation and benchmarks show that our proposal can still gain performance advantage even adding more functionalities.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62406683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CGIR: Conditional Generative Instance Reconstruction Attacks against Federated Learning
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3228302
Xiangrui Xu, Peng Liu, Wei Wang, Hongliang Ma, Bin Wang, Zhen Han, Yufei Han
Data reconstruction attacks have become an emerging privacy threat to Federated Learning (FL), inspiring a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown effective performance, prior art relies on various strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction Attack (CGIR attack) that drops all of these assumptions. Specifically, we propose a batch label inference attack for non-IID FL scenarios, where multiple images can share the same labels. Based on the inferred labels, we conduct a "coarse-to-fine" image reconstruction process that provides stable and effective data reconstruction. In addition, we equip the generator with a label condition restriction so that the contents and the labels of the reconstructed images are consistent. Our extensive evaluation on two model architectures and five image datasets shows that, without the auxiliary assumptions, the CGIR attack outperforms prior art, even for complex datasets, deep models, and large batch sizes. Furthermore, we evaluate several existing defense methods. The experimental results suggest that pruning gradients can be used as a strategy to mitigate privacy risks in FL if a model tolerates a slight accuracy loss.
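The label inference step builds on a well-known property of cross-entropy gradients shared in FL. The hypothetical single-sample numpy toy below only shows that underlying signal; the paper's batch, non-IID inference is considerably more involved.

```python
import numpy as np

# For cross-entropy loss, the last-layer gradient (probs - one_hot) is
# negative only at the true class, so a server seeing gradients can read
# off the label of a single training sample.
rng = np.random.default_rng(0)
num_classes, feat_dim = 10, 32
W = rng.normal(size=(num_classes, feat_dim))
b = np.zeros(num_classes)
h = rng.normal(size=feat_dim)                 # penultimate-layer features
true_label = 7

logits = W @ h + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
one_hot = np.eye(num_classes)[true_label]

grad_b = probs - one_hot                      # dL/db, part of the shared update
grad_W = np.outer(grad_b, h)                  # dL/dW; each row is (p_i - y_i) * h

# The only negative entry of grad_b (equivalently, the weight-gradient row
# that is anti-parallel to the rest) reveals the ground-truth label.
print(int(np.argmin(grad_b)), int(np.argmin(grad_W @ h)))   # -> 7 7
```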
{"title":"CGIR: Conditional Generative Instance Reconstruction Attacks against Federated Learning","authors":"Xiangrui Xu, Peng Liu, Wei Wang, Hongliang Ma, Bin Wang, Zhen Han, Yufei Han","doi":"10.1109/tdsc.2022.3228302","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3228302","url":null,"abstract":"Data reconstruction attack has become an emerging privacy threat to Federal Learning (FL), inspiring a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown some effective performance, prior arts rely on different strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction Attack (CGIR attack) that drops all these assumptions. Specifically, we propose a batch label inference attack in non-IID FL scenarios, where multiple images can share the same labels. Based on the inferred labels, we conduct a “coarse-to-fine” image reconstruction process that provides a stable and effective data reconstruction. In addition, we equip the generator with a label condition restriction so that the contents and the labels of the reconstructed images are consistent. Our extensive evaluation results on two model architectures and five image datasets show that without the auxiliary assumptions, the CGIR attack outperforms the prior arts, even for complex datasets, deep models, and large batch sizes. Furthermore, we evaluate several existing defense methods. The experimental results suggest that pruning gradients can be used as a strategy to mitigate privacy risks in FL if a model tolerates a slight accuracy loss.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62406842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Robustness-Assured White-Box Watermark in Neural Networks
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2023.3242737
Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Hualong Ma, Yue Zhao, Yingjiu Li
Recently, stealing highly valuable and large-scale deep neural network (DNN) models has become pervasive. The stolen models may be re-commercialized, e.g., deployed in embedded devices, released in model markets, utilized in competitions, etc., which infringes the Intellectual Property (IP) of the original owner. Detecting IP infringement of the stolen models is quite challenging, even with white-box access to them in the above scenarios, since they may have undergone fine-tuning, pruning, or functionality-equivalent adjustments that destroy any embedded watermark. Furthermore, adversaries may also attempt to extract the embedded watermark or forge a similar watermark to falsely claim ownership. In this article, we propose a novel DNN watermarking solution, named HufuNet, to detect IP infringement of DNN models against the above-mentioned attacks. Furthermore, HufuNet is the first solution theoretically proven to guarantee robustness against fine-tuning attacks. We evaluate HufuNet rigorously on four benchmark datasets with five popular DNN models, including convolutional neural networks (CNN) and recurrent neural networks (RNN). The experiments and analysis demonstrate that HufuNet is highly robust against model fine-tuning/pruning, transfer learning, kernel cutoff/supplement, functionality-equivalent attacks, and fraudulent ownership claims, and is thus highly promising for protecting large-scale DNN models in the real world.
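For orientation, the sketch below shows a generic white-box embed/verify workflow (a keyed sign-encoding toy, with made-up key and strength parameters), not HufuNet's actual technique of embedding a separate "Hufu" network or its robustness proof: a secret key selects weight positions, bits are encoded there, and verification measures the bit-recovery rate after the model is perturbed.

```python
import numpy as np

def keyed_positions(secret_key, num_params, num_bits):
    # The secret key seeds the PRNG that picks which weights carry the mark.
    return np.random.default_rng(secret_key).choice(num_params, size=num_bits,
                                                    replace=False)

def embed(weights, secret_key, bits, strength=0.05):
    w = weights.copy()
    pos = keyed_positions(secret_key, w.size, len(bits))
    w.flat[pos] = np.where(np.asarray(bits) == 1, strength, -strength)  # bits -> signs
    return w

def verify(weights, secret_key, bits):
    pos = keyed_positions(secret_key, weights.size, len(bits))
    recovered = (weights.flat[pos] > 0).astype(int)
    return float((recovered == np.asarray(bits)).mean())   # bit-recovery rate

rng = np.random.default_rng(1)
model_weights = rng.normal(scale=0.1, size=(256, 256))
watermark = list(rng.integers(0, 2, size=64))               # 64-bit owner mark
marked = embed(model_weights, secret_key=2023, bits=watermark)

# Mild fine-tuning noise barely disturbs the embedded signs,
# while a wrong key recovers essentially random bits.
fine_tuned = marked + rng.normal(scale=0.01, size=marked.shape)
print(verify(fine_tuned, secret_key=2023, bits=watermark))   # close to 1.0
print(verify(fine_tuned, secret_key=9999, bits=watermark))   # around 0.5
```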
{"title":"A Robustness-Assured White-Box Watermark in Neural Networks","authors":"Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Hualong Ma, Yue Zhao, Yingjiu Li","doi":"10.1109/tdsc.2023.3242737","DOIUrl":"https://doi.org/10.1109/tdsc.2023.3242737","url":null,"abstract":"Recently, stealing highly-valuable and large-scale deep neural network (DNN) models becomes pervasive. The stolen models may be re-commercialized, e.g., deployed in embedded devices, released in model markets, utilized in competitions, etc, which infringes the Intellectual Property (IP) of the original owner. Detecting IP infringement of the stolen models is quite challenging, even with the white-box access to them in the above scenarios, since they may have experienced fine-tuning, pruning, functionality-equivalent adjustment to destruct any embedded watermark. Furthermore, the adversaries may also attempt to extract the embedded watermark or forge a similar watermark to falsely claim ownership. In this article, we propose a novel DNN watermarking solution, named <inline-formula><tex-math notation=\"LaTeX\">$HufuNet$</tex-math><alternatives><mml:math><mml:mrow><mml:mi>H</mml:mi><mml:mi>u</mml:mi><mml:mi>f</mml:mi><mml:mi>u</mml:mi><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:math><inline-graphic xlink:href=\"peizhuo-ieq1-3242737.gif\"/></alternatives></inline-formula>, to detect IP infringement of DNN models against the above mentioned attacks. Furthermore, HufuNet is the first one theoretically proved to guarantee robustness against fine-tuning attacks. We evaluate HufuNet rigorously on four benchmark datasets with five popular DNN models, including convolutional neural network (CNN) and recurrent neural network (RNN). The experiments and analysis demonstrate that HufuNet is highly robust against model fine-tuning/pruning, transfer learning, kernels cutoff/supplement, functionality-equivalent attacks and fraudulent ownership claims, thus highly promising to protect large-scale DNN models in the real world.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62411190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Lightweight Privacy-preserving Distributed Recommender System using Tag-based Multikey Fully Homomorphic Data Encapsulation
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2023.3243598
Jun Zhou, Guobin Gao, Zhenfu Cao, K. Choo, Xiaolei Dong
Recommender systems facilitate personalized service provision through the statistical analysis and model training of user historical data (e.g., browsing behavior, travel history, etc.). To address the underlying privacy implications of such systems, a number of privacy-preserving recommendation approaches have been presented. There are, however, limitations in many of these approaches. For example, approaches that apply public key (fully) homomorphic encryption (FHE) to different users' historical ratings under a single public key of a target recommendation user incur significant computational overhead on resource-constrained local users and may not be scalable. On the other hand, approaches that do not utilize public key FHE can neither resist chosen ciphertext attacks (CCA) nor be straightforwardly applied to the setting of distributed servers. In this paper, a lightweight privacy-preserving distributed recommender system is proposed. Specifically, we present a new cryptographic primitive (a tag-based multikey fully homomorphic data encapsulation mechanism, TMFH-DEM) designed to achieve CCA security for both input privacy and result privacy. TMFH-DEM enables a set of distributed servers to collaboratively execute efficient privacy-preserving outsourced computation on multiple inputs encrypted under different secret keys from different data owners, without using public key FHE. Building on TMFH-DEM, we propose a lightweight privacy-preserving distributed recommender system, which flexibly returns all recommended items with their predicted ratings for all target users. A formal security proof shows that our proposal achieves both user historical rating data privacy and recommendation result privacy. Findings from our evaluations demonstrate its practicability in terms of scalability, recommendation accuracy, and computational and communication efficiency.
{"title":"Lightweight Privacy-preserving Distributed Recommender System using Tag-based Multikey Fully Homomorphic Data Encapsulation","authors":"Jun Zhou, Guobin Gao, Zhenfu Cao, K. Choo, Xiaolei Dong","doi":"10.1109/tdsc.2023.3243598","DOIUrl":"https://doi.org/10.1109/tdsc.2023.3243598","url":null,"abstract":"Recommender systems facilitate personalized service provision through the statistical analysis and model training of user historical data (e.g., browsing behavior, travel history, etc). To address the underpinning privacy implications associated with such systems, a number of privacy-preserving recommendation approaches have been presented. There are, however, limitations in many of these approaches. For example, approaches that apply public key (fully) homomorphic encryption (FHE) on different users. historical ratings under a unique public key of a target recommendation user incur significant computational overheads on resource-constrained local users and may not be scalable. On the other hand, approaches without utilizing public key FHE can neither resist chosen ciphertext attack (CCA), nor be straightforwardly applied to the setting of distributed servers. In this paper, a lightweight privacy-preserving distributed recommender system is proposed. Specifically, we present a new cryptographic primitive (i.e., tag-based multikey fully homomorphic data encapsulation mechanism; TMFH-DEM) designed to achieve CCA security for both input privacy and result privacy. TMFH-DEM enables a set of distributed servers to collaboratively execute efficient privacy-preserving outsourced computation on multiple inputs encrypted under different secret keys from different data owners, without using public key FHE. Building on TMFH-DEM, we propose a lightweight privacy-preserving distributed recommender system, which flexibly returns all the recommended items with certain predicted ratings for all target users. Formal security proof shows that our proposal achieves both user historical rating data privacy and recommendation result privacy. Findings from our evaluations demonstrate its practicability in terms of scalability, recommendation accuracy, computational and communication efficiency.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62411425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
OPUPO: Defending Against Membership Inference Attacks With Order-Preserving and Utility-Preserving Obfuscation
CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3232111
Yaru Liu, Hongcheng Li, Gang Huang, Wei Hua
In this work, we present OPUPO to protect machine learning classifiers against black-box membership inference attacks by alleviating the prediction difference between training and non-training samples. Specifically, we apply order-preserving and utility-preserving obfuscation to prediction vectors. The order-preserving constraint strictly maintains the order of confidence scores in the prediction vectors, guaranteeing that the model's classification accuracy is not affected. The utility-preserving constraint, on the other hand, enables adaptive distortions of the prediction vectors in order to protect their utility. Moreover, OPUPO is proved to be adversary-resistant, in that even well-informed, defense-aware adversaries cannot restore the original prediction vectors to bypass the defense. We evaluate OPUPO on machine learning and deep learning classifiers trained on four popular datasets. Experiments verify that OPUPO can effectively defend against state-of-the-art attack techniques with negligible computation overhead. Specifically, the inference accuracy can be reduced from as high as 87.66% to around 50% (i.e., random guessing), while the prediction time increases by only 0.44% on average. The experiments also show that OPUPO achieves a better privacy-utility trade-off than existing defenses.
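A toy version of the order-preserving constraint is sketched below: perturb a prediction vector, then reimpose the original ranking so the top-1 class and the full confidence order are untouched. OPUPO additionally solves for a utility-preserving distortion, which this sketch does not; the noise model and parameters here are illustrative only.

```python
import numpy as np

def obfuscate(probs, noise_scale=0.2, seed=None):
    """Perturb confidence scores, then restore the original ranking."""
    rng = np.random.default_rng(seed)
    order = np.argsort(probs)                  # ranking of the original scores
    noisy = probs + rng.normal(scale=noise_scale * probs.std(), size=probs.shape)
    noisy = np.clip(noisy, 1e-6, None)
    out = np.empty_like(probs)
    out[order] = np.sort(noisy)                # re-assign values in the original order
    return out / out.sum()                     # renormalize to a valid distribution

p = np.array([0.62, 0.21, 0.09, 0.05, 0.03])
q = obfuscate(p, seed=0)
print(np.argsort(p).tolist() == np.argsort(q).tolist())   # True: order preserved
print(q)                                                   # distorted confidences
```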
{"title":"<b>OPUPO</b>: Defending Against Membership Inference Attacks With <b>O</b>rder-<b>P</b>reserving and <b>U</b>tility-<b>P</b>reserving <b>O</b>bfuscation","authors":"Yaru Liu, Hongcheng Li, Gang Huang, Wei Hua","doi":"10.1109/tdsc.2022.3232111","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232111","url":null,"abstract":"In this work, we present OPUPO to protect machine learning classifiers against black-box membership inference attacks by alleviating the prediction difference between training and non-training samples. Specifically, we apply order-preserving and utility-preserving obfuscation to prediction vectors. The order-preserving constraint strictly maintains the order of confidence scores in the prediction vectors, guaranteeing that the model's classification accuracy is not affected. The utility-preserving constraint, on the other hand, enables adaptive distortions to the prediction vectors in order to protect their utility. Moreover, OPUPO is proved to be adversary resistant that even well-informed defense-aware adversaries cannot restore the original prediction vectors to bypass the defense. We evaluate OPUPO on machine learning and deep learning classifiers trained with four popular datasets. Experiments verify that OPUPO can effectively defend against state-of-the-art attack techniques with negligible computation overhead. In specific, the inference accuracy could be reduced from as high as 87.66% to around 50%, i.e., random guess, and the prediction time will increase by only 0.44% on average. The experiments also show that OPUPO could achieve better privacy-utility trade-off than existing defenses.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135566435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Holistic Implicit Factor Evaluation of Model Extraction Attacks
CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3231271
Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang
Model extraction attacks (MEAs) allow adversaries to train a surrogate model that replicates the target model's decision pattern. While several attacks and defenses have been studied in depth, the underlying reasons for our susceptibility to them often remain unclear. Analyzing these influencing factors helps promote secure deep learning (DL) systems, but it requires studying extraction attacks in various scenarios to determine what makes different attacks succeed and what the hallmarks of vulnerable DL models are. However, understanding, implementing, and evaluating even a single attack requires extremely high technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of the factors implicated in MEAs, relying on the attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives: the impact of the target model's task accuracy, architecture, and robustness on MEAs, as well as the impact of the surrogate model's architecture on MEAs. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that improving the robustness of the target model via adversarial training makes it more vulnerable to model extraction attacks.
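The sketch below is a bare-bones extraction loop of the kind such studies evaluate: the attacker queries the target on its own inputs, trains a surrogate on the returned labels, and measures agreement with the target. The scikit-learn models and synthetic data are stand-ins; the paper studies sixteen DNN architectures on image datasets.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# The owner trains the target model on private data.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_owner, X_attacker, y_owner, _ = train_test_split(X, y, test_size=0.5, random_state=0)
target = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_owner, y_owner)

# The attacker only queries the target and trains a surrogate on its outputs.
X_query, X_eval = X_attacker[:2500], X_attacker[2500:]
stolen_labels = target.predict(X_query)
surrogate = LogisticRegression(max_iter=1000).fit(X_query, stolen_labels)

# Fidelity: how often the surrogate agrees with the target on unseen inputs.
agreement = np.mean(surrogate.predict(X_eval) == target.predict(X_eval))
print(f"surrogate/target agreement: {agreement:.2%}")
```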
{"title":"Holistic Implicit Factor Evaluation of Model Extraction Attacks","authors":"Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang","doi":"10.1109/tdsc.2022.3231271","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3231271","url":null,"abstract":"Model extraction attacks (MEAs) allow adversaries to replicate a surrogate model analogous to the target model's decision pattern. While several attacks and defenses have been studied in-depth, the underlying reasons behind our susceptibility to them often remain unclear. Analyzing these implication influence factors helps to promote secure deep learning (DL) systems, it requires studying extraction attacks in various scenarios to determine the success of different attacks and the hallmarks of DLs. However, understanding, implementing, and evaluating even a single attack requires extremely high technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of implication factors for MEAs which relies on the attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives. we consider the impact of the task accuracy, model architecture, and robustness of the target model on MEAs, as well as the impact of the model architecture of the surrogate model on MEAs. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that improving the robustness of the target model via adversarial training is more vulnerable to model extraction attacks.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135610552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-Modal Side Channel Data Driven Golden-Free Detection of Software and Firmware Trojans
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3231632
P. Krishnamurthy, Virinchi Roy Surabhi, H. Pearce, R. Karri, F. Khorrami
This study explores data-driven detection of firmware/software Trojans in embedded systems without golden models. We consider embedded systems such as single board computers and industrial controllers. While prior literature considers side channel based anomaly detection, this study addresses the following central question: is anomaly detection feasible when using low-fidelity simulated data without using data from a known-good (golden) system? To study this question, we use data from a simulator-based proxy as a stand-in for unavailable golden data from a known-good system. Using data generated from the simulator, one-class classifier machine learning models are applied to detect discrepancies against expected side channel signal patterns and their inter-relationships. Side channels fused for Trojan detection include multi-modal side channel measurement data (such as Hardware Performance Counters, processor load, temperature, and power consumption). Additionally, fuzzing is introduced to increase detectability of Trojans. To experimentally evaluate the approach, we generate low-fidelity data using a simulator implemented with a component-based model and an information bottleneck based on Gaussian stochastic models. We consider example Trojans and show that fuzzing-aided golden-free Trojan detection is feasible using simulated data as a baseline.
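The sketch below illustrates the golden-free pipeline under stated assumptions: a made-up low-fidelity simulator generates multi-modal side-channel features (HPC counts, CPU load, temperature, power), a one-class classifier learns the expected envelope, and deviations in device measurements are flagged. The simulator, feature values, and thresholds are invented for illustration and are not the paper's component-based model.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

def simulate_clean(n):
    # Low-fidelity stand-in for the simulator-based proxy of a golden system.
    hpc = rng.normal(1.0e6, 5e4, n)                     # instructions retired
    load = rng.normal(0.35, 0.05, n)                    # CPU load
    temp = rng.normal(55, 2, n)                         # degrees C
    power = 2.0 + 3.0 * load + rng.normal(0, 0.1, n)    # correlated with load
    return np.column_stack([hpc, load, temp, power])

X_sim = simulate_clean(2000)                            # training data: simulator only
scaler = StandardScaler().fit(X_sim)
detector = OneClassSVM(nu=0.05, gamma="scale").fit(scaler.transform(X_sim))

X_device = simulate_clean(200)                          # clean device measurements
X_trojan = X_device.copy()
X_trojan[:, 3] += 0.8                                   # Trojan draws extra power

# predict() returns -1 for outliers, i.e. measurements outside the envelope.
print("clean flagged:", np.mean(detector.predict(scaler.transform(X_device)) == -1))
print("trojan flagged:", np.mean(detector.predict(scaler.transform(X_trojan)) == -1))
```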
{"title":"Multi-Modal Side Channel Data Driven Golden-Free Detection of Software and Firmware Trojans","authors":"P. Krishnamurthy, Virinchi Roy Surabhi, H. Pearce, R. Karri, F. Khorrami","doi":"10.1109/tdsc.2022.3231632","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3231632","url":null,"abstract":"This study explores data-driven detection of firmware/software Trojans in embedded systems without golden models. We consider embedded systems such as single board computers and industrial controllers. While prior literature considers side channel based anomaly detection, this study addresses the following central question: is anomaly detection feasible when using low-fidelity simulated data without using data from a known-good (golden) system? To study this question, we use data from a simulator-based proxy as a stand-in for unavailable golden data from a known-good system. Using data generated from the simulator, one-class classifier machine learning models are applied to detect discrepancies against expected side channel signal patterns and their inter-relationships. Side channels fused for Trojan detection include multi-modal side channel measurement data (such as Hardware Performance Counters, processor load, temperature, and power consumption). Additionally, fuzzing is introduced to increase detectability of Trojans. To experimentally evaluate the approach, we generate low-fidelity data using a simulator implemented with a component-based model and an information bottleneck based on Gaussian stochastic models. We consider example Trojans and show that fuzzing-aided golden-free Trojan detection is feasible using simulated data as a baseline.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WF-MTD: Evolutionary Decision Method for Moving Target Defense Based on Wright-Fisher Process
IF 7.3 · CAS Tier 2 (Computer Science) · Q1 Computer Science · Pub Date: 2023-11-01 · DOI: 10.1109/tdsc.2022.3232537
Jinglei Tan, Hui Jin, Hao Hu, Ruiqin Hu, Hongqi Zhang, Hengwei Zhang
The limitations of the professional knowledge and cognitive capabilities of both attackers and defenders mean that moving target attack-defense conflicts are not completely rational, which makes it difficult to select optimal moving target defense strategies for use in real-world attack-defense scenarios. Starting from this imperfect rationality on both sides, we construct a Wright-Fisher process-based moving target defense strategy evolution model called WF-MTD. In our method, we introduce rationality parameters to describe the strategy-learning capabilities of both the attacker and the defender. By solving for the evolutionarily stable equilibrium, we develop a method for selecting the optimal defense strategy for moving targets and describe the evolution trajectories of the attack-defense strategies. Our experimental results on a typical network information system show that WF-MTD selects appropriate MTD strategies in different states along different attack paths, with good effectiveness and broad applicability. In addition, compared with no hopping, fixed periodic route hopping, and random periodic route hopping, the route hopping strategy based on WF-MTD increases defense payoffs by 58.7%, 27.6%, and 24.6%, respectively.
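As a minimal sketch of the underlying dynamics, the loop below runs a standard Wright-Fisher update over three candidate hopping strategies: each generation, a finite population resamples strategies with probability proportional to frequency times fitness, where a selection-intensity parameter w plays the role of a rationality parameter. The payoff values and parameters are placeholders, not the paper's attack-defense model.

```python
import numpy as np

rng = np.random.default_rng(0)

payoff = np.array([3.0, 1.5, 1.0])   # expected defense payoff of 3 hopping strategies
N = 200                              # finite population size (bounded rationality)
w = 0.6                              # selection intensity / rationality parameter
counts = np.array([10, 95, 95])      # initial strategy counts

for gen in range(200):
    freq = counts / N
    fitness = 1 - w + w * payoff     # Wright-Fisher fitness with intensity w
    probs = freq * fitness
    probs /= probs.sum()
    counts = rng.multinomial(N, probs)   # resample the next generation

print(counts / N)   # typically concentrates on the highest-payoff strategy
```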
{"title":"WF-MTD: Evolutionary Decision Method for Moving Target Defense Based on Wright-Fisher Process","authors":"Jinglei Tan, Hui Jin, Hao Hu, Ruiqin Hu, Hongqi Zhang, Hengwei Zhang","doi":"10.1109/tdsc.2022.3232537","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232537","url":null,"abstract":"The limitations of the professional knowledge and cognitive capabilities of both attackers and defenders mean that moving target attack-defense conflicts are not completely rational, which makes it difficult to select optimal moving target defense strategies difficult for use in real-world attack-defense scenarios. Starting from the imperfect rationality of both attack-defense, we construct a Wright-Fisher process-based moving target defense strategy evolution model called WF-MTD. In our method, we introduce rationality parameters to describe the strategy learning capabilities of both the attacker and the defender. By solving for the evolutionarily stable equilibrium, we develop a method for selecting the optimal defense strategy for moving targets and describe the evolution trajectories of the attack-defense strategies. Our experimental results in our example of a typical network information system show that WF-MTD selects appropriate MTD strategies in different states along different attack paths, with good effectiveness and broad applicability. In addition, compared with no hopping strategy, fixed periodic route hopping strategy, and random periodic route hopping strategy, the route hopping strategy based on WF-MTD increase defense payoffs by 58.7%, 27.6%, and 24.6%, respectively.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12