Window Canaries: Re-thinking Stack Canaries for Architectures with Register Windows
Kai Lehniger, P. Langendorfer
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3230748
This paper presents Window Canaries, a novel take on stack canaries for architectures with register windows that protects return addresses and stack pointers without adding instructions to each potentially vulnerable function. Instead, placement and checking of the canary word are moved into the window exception handlers responsible for handling register window overflows and underflows. The approach incurs low performance overhead while guaranteeing that return addresses are protected against stack buffer overflows, without relying on a heuristic to decide which functions to instrument. The contributions of this paper are a complete implementation of the approach for the Xtensa LX architecture with the register window option, as well as a performance evaluation and a discussion of advantages and drawbacks.
{"title":"Window Canaries: Re-thinking Stack Canaries for Architectures with Register Windows","authors":"Kai Lehniger, P. Langendorfer","doi":"10.1109/tdsc.2022.3230748","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3230748","url":null,"abstract":"This paper presents Window Canaries, a novel approach to Stack Canaries for architectures with a register window that protects return addresses and stack pointers without the need of adding additional instruction to each potentially vulnerable function. Instead, placement and check of the canary word is moved to window exception handlers that are responsible to handle register window overflows and underflows. The approach offers low performance overhead while guaranteeing that return addresses are protected by stack buffer overflows without relying on a heuristic that decides which functions to instrument. The contributions of this paper are a complete implementation of the approach for the Xtensa LX architecture with register window option as well as a performance evaluation and discussion of advantages and drawbacks.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Robust Blind Video Watermarking Against Geometric Deformations and Online Video Sharing Platform Processing
Mingze He, Hongxia Wang, Fei Zhang, S. Abdullahi, Ling Yang
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3232484
In recent years, online video sharing platforms have become widely available on social networks. To protect copyright and track the origins of shared videos, various video watermarking methods have been proposed. However, their robustness degrades significantly under geometric deformations, which destroy the synchronization between watermark embedding and extraction. To this end, we propose a novel robust blind video watermarking scheme that embeds the watermark into low-order recursive Zernike moments. To reduce time complexity, we devise an efficient computation method that exploits the characteristics of video and moments; the recursive computation also greatly improves moment accuracy. Furthermore, we design an optimization strategy that enhances visual quality and reduces distortion drift in watermarked videos by analyzing the radial basis function. The robustness of the proposed scheme is verified against diverse attacks, including geometric deformations, aspect-ratio changes, temporal synchronization attacks, and combined attacks. In practical settings, the scheme effectively withstands processing by video sharing platforms as well as screenshots taken with smartphones and PC monitors, and the watermark is extracted without the host video. Experimental results show that our scheme outperforms other state-of-the-art schemes in both imperceptibility and robustness.
{"title":"Robust Blind Video Watermarking Against Geometric Deformations and Online Video Sharing Platform Processing","authors":"Mingze He, Hongxia Wang, Fei Zhang, S. Abdullahi, Ling Yang","doi":"10.1109/tdsc.2022.3232484","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232484","url":null,"abstract":"In recent years, online video sharing platforms have been widely available on social networks. To protect copyright and track the origins of these shared videos, some video watermarking methods have been proposed. However, their robustness performance is significantly degraded under geometric deformations, which destroy the synchronization between the watermark embedding and extraction. To this end, we propose a novel robust blind video watermarking scheme by embedding the watermark into low-order recursive Zernike moments. To reduce the time complexity, we give an efficient computation method by exploring the characteristics of video and moments. The moment accuracy is greatly improved due to the introduction of a recursive computation method. Furthermore, we design an optimization strategy to enhance visual quality and reduce distortion drift of watermarked videos by analyzing the radial basis function. The robustness of the proposed scheme is verified by different attacks, including geometric deformations, length-width ratio changes, temporal synchronization attacks, and combined attacks. In practical applications, the proposed scheme effectively resists processing from video sharing platforms and screenshots taken with smartphones and PC monitors. The watermark is extracted without the host video. Experimental results show that our proposed scheme outperforms other state-of-the-art schemes in terms of imperceptibility and robustness.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ACA: Anonymous, Confidential and Auditable Transaction Systems for Blockchain
Chao Lin, Xinyi Huang, Jianting Ning, D. He
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3228236
The rapid development and wide application of blockchain highlight not only the significance of privacy protection (including anonymity and confidentiality) but also the necessity of auditability. While several ingenious schemes supporting both privacy protection and auditability have been proposed, such as MiniLedger and traceable Monero, they either provide incomplete privacy protection (achieving anonymity only within a small set, or providing confidentiality but not anonymity), impose additional auditing conditions such as reaching a threshold transaction volume or requiring permissioned nodes to serve as the manager, or are restricted to specific blockchains such as Monero. To mitigate these issues, this article proposes a generic anonymous, confidential, and auditable transaction system (named ACA) that is compatible with both UTXO-based permissionless and permissioned blockchains. The core techniques of ACA are a traceable anonymous key generation mechanism and a publicly verifiable authorization mechanism built from existing cryptographic tools (i.e., public key encryption, partially homomorphic encryption, and accumulators), together with carefully designed signatures of knowledge and a smart contract. To demonstrate the practicality of our proposal, we first prove its security, including authenticity, anonymity, confidentiality, and soundness, and then provide an instantiation to evaluate its performance. The implementation and benchmarks show that our proposal retains a performance advantage even while adding functionality.
{"title":"ACA: Anonymous, Confidential and Auditable Transaction Systems for Blockchain","authors":"Chao Lin, Xinyi Huang, Jianting Ning, D. He","doi":"10.1109/tdsc.2022.3228236","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3228236","url":null,"abstract":"The rapid development and wide application of blockchain not only highlight the significance of privacy protection (including anonymity and confidentiality) but also the necessity of auditability. While several ingenious schemes such as MiniLedger and traceable Monero supporting both privacy protection and auditability have been proposed, they either provide incomplete privacy protection (only achieving anonymity within a small set or only providing confidentiality but not anonymity), or involve additional auditing conditions such as reaching threshold transaction volume or requiring permissioned nodes to serve as the manager, or restrict to specific blockchain types such as Monero. To mitigate these issues, this article proposes a generic anonymous, confidential, and auditable transaction system (named ACA), which is compatible with both UTXO-based permissionless and permissioned blockchains. Core technologies of ACA include designed traceable anonymous key generation and publicly verifiable authorization mechanisms from existing cryptographic tools (i.e., public key encryption, partially homomorphic encryption, and accumulator) as well as the meticulous designed signatures of knowledge and smart contract. To demonstrate the entity of our proposal, we first prove its security including authenticity, anonymity, confidentiality and soundness, and then provide an instantiation to evaluate its performance. The final implementation and benchmarks show that our proposal can still gain performance advantage even adding more functionalities.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62406683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CGIR: Conditional Generative Instance Reconstruction Attacks against Federated Learning
Xiangrui Xu, Peng Liu, Wei Wang, Hongliang Ma, Bin Wang, Zhen Han, Yufei Han
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3228302
Data reconstruction attacks have become an emerging privacy threat to Federated Learning (FL), inspiring a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown effectiveness, prior works rely on various strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction Attack (CGIR attack) that drops all of these assumptions. Specifically, we propose a batch label inference attack for non-IID FL scenarios in which multiple images can share the same label. Based on the inferred labels, we conduct a "coarse-to-fine" image reconstruction process that yields stable and effective data reconstruction. In addition, we equip the generator with a label-condition restriction so that the contents and labels of the reconstructed images are consistent. Our extensive evaluation on two model architectures and five image datasets shows that, without the auxiliary assumptions, the CGIR attack outperforms prior work, even for complex datasets, deep models, and large batch sizes. Furthermore, we evaluate several existing defense methods. The experimental results suggest that gradient pruning can serve as a strategy to mitigate privacy risks in FL if a model can tolerate a slight accuracy loss.
{"title":"CGIR: Conditional Generative Instance Reconstruction Attacks against Federated Learning","authors":"Xiangrui Xu, Peng Liu, Wei Wang, Hongliang Ma, Bin Wang, Zhen Han, Yufei Han","doi":"10.1109/tdsc.2022.3228302","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3228302","url":null,"abstract":"Data reconstruction attack has become an emerging privacy threat to Federal Learning (FL), inspiring a rethinking of FL's ability to protect privacy. While existing data reconstruction attacks have shown some effective performance, prior arts rely on different strong assumptions to guide the reconstruction process. In this work, we propose a novel Conditional Generative Instance Reconstruction Attack (CGIR attack) that drops all these assumptions. Specifically, we propose a batch label inference attack in non-IID FL scenarios, where multiple images can share the same labels. Based on the inferred labels, we conduct a “coarse-to-fine” image reconstruction process that provides a stable and effective data reconstruction. In addition, we equip the generator with a label condition restriction so that the contents and the labels of the reconstructed images are consistent. Our extensive evaluation results on two model architectures and five image datasets show that without the auxiliary assumptions, the CGIR attack outperforms the prior arts, even for complex datasets, deep models, and large batch sizes. Furthermore, we evaluate several existing defense methods. The experimental results suggest that pruning gradients can be used as a strategy to mitigate privacy risks in FL if a model tolerates a slight accuracy loss.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62406842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Robustness-Assured White-Box Watermark in Neural Networks
Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Hualong Ma, Yue Zhao, Yingjiu Li
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2023.3242737
Recently, the theft of highly valuable, large-scale deep neural network (DNN) models has become pervasive. Stolen models may be re-commercialized, e.g., deployed in embedded devices, released in model markets, or used in competitions, which infringes the intellectual property (IP) of the original owner. Detecting IP infringement of stolen models is quite challenging, even with white-box access in the above scenarios, since the models may have undergone fine-tuning, pruning, or functionality-equivalent adjustments intended to destroy any embedded watermark. Furthermore, adversaries may attempt to extract the embedded watermark or forge a similar one to falsely claim ownership. In this article, we propose a novel DNN watermarking solution, named HufuNet, to detect IP infringement of DNN models under the above attacks. HufuNet is the first scheme theoretically proven to guarantee robustness against fine-tuning attacks. We evaluate HufuNet rigorously on four benchmark datasets with five popular DNN models, including convolutional neural networks (CNNs) and recurrent neural networks (RNNs). The experiments and analysis demonstrate that HufuNet is highly robust against model fine-tuning and pruning, transfer learning, kernel cutoff/supplement, functionality-equivalent attacks, and fraudulent ownership claims, and is thus highly promising for protecting large-scale DNN models in the real world.
{"title":"A Robustness-Assured White-Box Watermark in Neural Networks","authors":"Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Hualong Ma, Yue Zhao, Yingjiu Li","doi":"10.1109/tdsc.2023.3242737","DOIUrl":"https://doi.org/10.1109/tdsc.2023.3242737","url":null,"abstract":"Recently, stealing highly-valuable and large-scale deep neural network (DNN) models becomes pervasive. The stolen models may be re-commercialized, e.g., deployed in embedded devices, released in model markets, utilized in competitions, etc, which infringes the Intellectual Property (IP) of the original owner. Detecting IP infringement of the stolen models is quite challenging, even with the white-box access to them in the above scenarios, since they may have experienced fine-tuning, pruning, functionality-equivalent adjustment to destruct any embedded watermark. Furthermore, the adversaries may also attempt to extract the embedded watermark or forge a similar watermark to falsely claim ownership. In this article, we propose a novel DNN watermarking solution, named <inline-formula><tex-math notation=\"LaTeX\">$HufuNet$</tex-math><alternatives><mml:math><mml:mrow><mml:mi>H</mml:mi><mml:mi>u</mml:mi><mml:mi>f</mml:mi><mml:mi>u</mml:mi><mml:mi>N</mml:mi><mml:mi>e</mml:mi><mml:mi>t</mml:mi></mml:mrow></mml:math><inline-graphic xlink:href=\"peizhuo-ieq1-3242737.gif\"/></alternatives></inline-formula>, to detect IP infringement of DNN models against the above mentioned attacks. Furthermore, HufuNet is the first one theoretically proved to guarantee robustness against fine-tuning attacks. We evaluate HufuNet rigorously on four benchmark datasets with five popular DNN models, including convolutional neural network (CNN) and recurrent neural network (RNN). The experiments and analysis demonstrate that HufuNet is highly robust against model fine-tuning/pruning, transfer learning, kernels cutoff/supplement, functionality-equivalent attacks and fraudulent ownership claims, thus highly promising to protect large-scale DNN models in the real world.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62411190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lightweight Privacy-preserving Distributed Recommender System using Tag-based Multikey Fully Homomorphic Data Encapsulation
Jun Zhou, Guobin Gao, Zhenfu Cao, K. Choo, Xiaolei Dong
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2023.3243598
Recommender systems provide personalized services through statistical analysis and model training over users' historical data (e.g., browsing behavior, travel history). To address the privacy implications of such systems, a number of privacy-preserving recommendation approaches have been proposed, but many have limitations. For example, approaches that apply public-key (fully) homomorphic encryption (FHE) to different users' historical ratings under a single public key of a target recommendation user incur significant computational overhead on resource-constrained local users and may not scale. On the other hand, approaches that avoid public-key FHE can neither resist chosen-ciphertext attacks (CCA) nor be straightforwardly applied in a distributed-server setting. In this paper, a lightweight privacy-preserving distributed recommender system is proposed. Specifically, we present a new cryptographic primitive, the tag-based multikey fully homomorphic data encapsulation mechanism (TMFH-DEM), designed to achieve CCA security for both input privacy and result privacy. TMFH-DEM enables a set of distributed servers to collaboratively perform efficient privacy-preserving outsourced computation on multiple inputs encrypted under different secret keys of different data owners, without using public-key FHE. Building on TMFH-DEM, we propose a lightweight privacy-preserving distributed recommender system that flexibly returns all recommended items with predicted ratings for all target users. A formal security proof shows that our proposal protects both users' historical rating data and recommendation results. Our evaluations demonstrate its practicality in terms of scalability, recommendation accuracy, and computational and communication efficiency.
{"title":"Lightweight Privacy-preserving Distributed Recommender System using Tag-based Multikey Fully Homomorphic Data Encapsulation","authors":"Jun Zhou, Guobin Gao, Zhenfu Cao, K. Choo, Xiaolei Dong","doi":"10.1109/tdsc.2023.3243598","DOIUrl":"https://doi.org/10.1109/tdsc.2023.3243598","url":null,"abstract":"Recommender systems facilitate personalized service provision through the statistical analysis and model training of user historical data (e.g., browsing behavior, travel history, etc). To address the underpinning privacy implications associated with such systems, a number of privacy-preserving recommendation approaches have been presented. There are, however, limitations in many of these approaches. For example, approaches that apply public key (fully) homomorphic encryption (FHE) on different users. historical ratings under a unique public key of a target recommendation user incur significant computational overheads on resource-constrained local users and may not be scalable. On the other hand, approaches without utilizing public key FHE can neither resist chosen ciphertext attack (CCA), nor be straightforwardly applied to the setting of distributed servers. In this paper, a lightweight privacy-preserving distributed recommender system is proposed. Specifically, we present a new cryptographic primitive (i.e., tag-based multikey fully homomorphic data encapsulation mechanism; TMFH-DEM) designed to achieve CCA security for both input privacy and result privacy. TMFH-DEM enables a set of distributed servers to collaboratively execute efficient privacy-preserving outsourced computation on multiple inputs encrypted under different secret keys from different data owners, without using public key FHE. Building on TMFH-DEM, we propose a lightweight privacy-preserving distributed recommender system, which flexibly returns all the recommended items with certain predicted ratings for all target users. Formal security proof shows that our proposal achieves both user historical rating data privacy and recommendation result privacy. Findings from our evaluations demonstrate its practicability in terms of scalability, recommendation accuracy, computational and communication efficiency.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62411425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OPUPO: Defending Against Membership Inference Attacks With Order-Preserving and Utility-Preserving Obfuscation
Yaru Liu, Hongcheng Li, Gang Huang, Wei Hua
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3232111
In this work, we present OPUPO, which protects machine learning classifiers against black-box membership inference attacks by reducing the prediction difference between training and non-training samples. Specifically, we apply order-preserving and utility-preserving obfuscation to prediction vectors. The order-preserving constraint strictly maintains the order of confidence scores in the prediction vector, guaranteeing that the model's classification accuracy is unaffected. The utility-preserving constraint, in turn, enables adaptive distortions of the prediction vectors that protect their utility. Moreover, OPUPO is proven adversary-resistant: even well-informed, defense-aware adversaries cannot restore the original prediction vectors to bypass the defense. We evaluate OPUPO on machine learning and deep learning classifiers trained on four popular datasets. Experiments verify that OPUPO effectively defends against state-of-the-art attack techniques with negligible computational overhead: inference accuracy is reduced from as high as 87.66% to around 50% (i.e., random guessing), while prediction time increases by only 0.44% on average. The experiments also show that OPUPO achieves a better privacy-utility trade-off than existing defenses.
{"title":"<b>OPUPO</b>: Defending Against Membership Inference Attacks With <b>O</b>rder-<b>P</b>reserving and <b>U</b>tility-<b>P</b>reserving <b>O</b>bfuscation","authors":"Yaru Liu, Hongcheng Li, Gang Huang, Wei Hua","doi":"10.1109/tdsc.2022.3232111","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232111","url":null,"abstract":"In this work, we present OPUPO to protect machine learning classifiers against black-box membership inference attacks by alleviating the prediction difference between training and non-training samples. Specifically, we apply order-preserving and utility-preserving obfuscation to prediction vectors. The order-preserving constraint strictly maintains the order of confidence scores in the prediction vectors, guaranteeing that the model's classification accuracy is not affected. The utility-preserving constraint, on the other hand, enables adaptive distortions to the prediction vectors in order to protect their utility. Moreover, OPUPO is proved to be adversary resistant that even well-informed defense-aware adversaries cannot restore the original prediction vectors to bypass the defense. We evaluate OPUPO on machine learning and deep learning classifiers trained with four popular datasets. Experiments verify that OPUPO can effectively defend against state-of-the-art attack techniques with negligible computation overhead. In specific, the inference accuracy could be reduced from as high as 87.66% to around 50%, i.e., random guess, and the prediction time will increase by only 0.44% on average. The experiments also show that OPUPO could achieve better privacy-utility trade-off than existing defenses.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135566435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Holistic Implicit Factor Evaluation of Model Extraction Attacks
Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3231271
Model extraction attacks (MEAs) allow adversaries to build a surrogate model that replicates the target model's decision pattern. While several attacks and defenses have been studied in depth, the underlying reasons for susceptibility to them often remain unclear. Analyzing these implicit influence factors helps promote secure deep learning (DL) systems, but it requires studying extraction attacks in varied scenarios to determine what makes different attacks succeed and which properties of DL models matter. However, understanding, implementing, and evaluating even a single attack requires substantial technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of the implicit factors behind MEAs, based on an attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives: the impact of the target model's task accuracy, architecture, and robustness on MEAs, as well as the impact of the surrogate model's architecture. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that target models whose robustness is improved via adversarial training are more vulnerable to model extraction attacks.
{"title":"Holistic Implicit Factor Evaluation of Model Extraction Attacks","authors":"Anli Yan, Hongyang Yan, Li Hu, Xiaozhang Liu, Teng Huang","doi":"10.1109/tdsc.2022.3231271","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3231271","url":null,"abstract":"Model extraction attacks (MEAs) allow adversaries to replicate a surrogate model analogous to the target model's decision pattern. While several attacks and defenses have been studied in-depth, the underlying reasons behind our susceptibility to them often remain unclear. Analyzing these implication influence factors helps to promote secure deep learning (DL) systems, it requires studying extraction attacks in various scenarios to determine the success of different attacks and the hallmarks of DLs. However, understanding, implementing, and evaluating even a single attack requires extremely high technical effort, making it impractical to study the vast number of unique extraction attack scenarios. To this end, we present a first-of-its-kind holistic evaluation of implication factors for MEAs which relies on the attack process abstracted from state-of-the-art MEAs. Specifically, we concentrate on four perspectives. we consider the impact of the task accuracy, model architecture, and robustness of the target model on MEAs, as well as the impact of the model architecture of the surrogate model on MEAs. Our empirical evaluation includes an ablation study over sixteen model architectures and four image datasets. Surprisingly, our study shows that improving the robustness of the target model via adversarial training is more vulnerable to model extraction attacks.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135610552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Modal Side Channel Data Driven Golden-Free Detection of Software and Firmware Trojans
P. Krishnamurthy, Virinchi Roy Surabhi, H. Pearce, R. Karri, F. Khorrami
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3231632
This study explores data-driven detection of firmware/software Trojans in embedded systems without golden models. We consider embedded systems such as single-board computers and industrial controllers. While prior literature considers side-channel-based anomaly detection, this study addresses the following central question: is anomaly detection feasible using low-fidelity simulated data, without data from a known-good (golden) system? To study this question, we use data from a simulator-based proxy as a stand-in for unavailable golden data. Using data generated by the simulator, one-class classifier machine learning models are applied to detect discrepancies from expected side-channel signal patterns and their inter-relationships. The side channels fused for Trojan detection include multi-modal measurement data such as hardware performance counters, processor load, temperature, and power consumption. Additionally, fuzzing is introduced to increase the detectability of Trojans. To evaluate the approach experimentally, we generate low-fidelity data using a simulator built from a component-based model and an information bottleneck based on Gaussian stochastic models. We consider example Trojans and show that fuzzing-aided, golden-free Trojan detection is feasible using simulated data as a baseline.
{"title":"Multi-Modal Side Channel Data Driven Golden-Free Detection of Software and Firmware Trojans","authors":"P. Krishnamurthy, Virinchi Roy Surabhi, H. Pearce, R. Karri, F. Khorrami","doi":"10.1109/tdsc.2022.3231632","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3231632","url":null,"abstract":"This study explores data-driven detection of firmware/software Trojans in embedded systems without golden models. We consider embedded systems such as single board computers and industrial controllers. While prior literature considers side channel based anomaly detection, this study addresses the following central question: is anomaly detection feasible when using low-fidelity simulated data without using data from a known-good (golden) system? To study this question, we use data from a simulator-based proxy as a stand-in for unavailable golden data from a known-good system. Using data generated from the simulator, one-class classifier machine learning models are applied to detect discrepancies against expected side channel signal patterns and their inter-relationships. Side channels fused for Trojan detection include multi-modal side channel measurement data (such as Hardware Performance Counters, processor load, temperature, and power consumption). Additionally, fuzzing is introduced to increase detectability of Trojans. To experimentally evaluate the approach, we generate low-fidelity data using a simulator implemented with a component-based model and an information bottleneck based on Gaussian stochastic models. We consider example Trojans and show that fuzzing-aided golden-free Trojan detection is feasible using simulated data as a baseline.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WF-MTD: Evolutionary Decision Method for Moving Target Defense Based on Wright-Fisher Process
Jinglei Tan, Hui Jin, Hao Hu, Ruiqin Hu, Hongqi Zhang, Hengwei Zhang
Pub Date: 2023-11-01 | DOI: 10.1109/tdsc.2022.3232537
The limited professional knowledge and cognitive capabilities of both attackers and defenders mean that moving-target attack-defense conflicts are not completely rational, which makes it difficult to select optimal moving target defense strategies for real-world attack-defense scenarios. Starting from this imperfect rationality on both sides, we construct WF-MTD, a Wright-Fisher process-based evolution model for moving target defense strategies. In our method, we introduce rationality parameters to describe the strategy-learning capabilities of both the attacker and the defender. By solving for the evolutionarily stable equilibrium, we develop a method for selecting the optimal moving target defense strategy and describe the evolution trajectories of the attack-defense strategies. Experimental results on a typical network information system show that WF-MTD selects appropriate MTD strategies in different states along different attack paths, with good effectiveness and broad applicability. Compared with no hopping, fixed periodic route hopping, and random periodic route hopping, the WF-MTD-based route hopping strategy increases defense payoffs by 58.7%, 27.6%, and 24.6%, respectively.
{"title":"WF-MTD: Evolutionary Decision Method for Moving Target Defense Based on Wright-Fisher Process","authors":"Jinglei Tan, Hui Jin, Hao Hu, Ruiqin Hu, Hongqi Zhang, Hengwei Zhang","doi":"10.1109/tdsc.2022.3232537","DOIUrl":"https://doi.org/10.1109/tdsc.2022.3232537","url":null,"abstract":"The limitations of the professional knowledge and cognitive capabilities of both attackers and defenders mean that moving target attack-defense conflicts are not completely rational, which makes it difficult to select optimal moving target defense strategies difficult for use in real-world attack-defense scenarios. Starting from the imperfect rationality of both attack-defense, we construct a Wright-Fisher process-based moving target defense strategy evolution model called WF-MTD. In our method, we introduce rationality parameters to describe the strategy learning capabilities of both the attacker and the defender. By solving for the evolutionarily stable equilibrium, we develop a method for selecting the optimal defense strategy for moving targets and describe the evolution trajectories of the attack-defense strategies. Our experimental results in our example of a typical network information system show that WF-MTD selects appropriate MTD strategies in different states along different attack paths, with good effectiveness and broad applicability. In addition, compared with no hopping strategy, fixed periodic route hopping strategy, and random periodic route hopping strategy, the route hopping strategy based on WF-MTD increase defense payoffs by 58.7%, 27.6%, and 24.6%, respectively.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62407844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}