At ACSAC 2019, we introduced a new cryptographic primitive called proof of aliveness (PoA), which allows us to remotely and automatically track the running status (aliveness) of devices in the field in cyber-physical systems. We proposed a one-way function (OWF) chain structure to build an efficient proof of aliveness, in which the prover periodically sends the nodes of the OWF chain in reverse order. However, the finite number of nodes in an OWF chain limits its practicality. Our second PoA scheme enhances the first construction by linking multiple OWF chains together with a pseudo-random generator chain. This enhancement allows us to integrate one-time signature (OTS) schemes into the structure of the second construction to auto-replenish the aliveness proofs, enabling continuous use without interruption for re-initialization. In this work, our primary motivation is to further improve our second PoA and auto-replenishment schemes. Instead of storing the tail nodes of multiple OWF chains on the verifier side, we use a Bloom filter to compress them, reducing the storage cost by 4.7 times. Moreover, the OTS-based auto-replenishment solution cannot be applied to our first scheme, and it is not very efficient despite its standard-model security. To overcome these limitations, we design a new auto-replenishment scheme from a hash-based commitment in the random oracle model, which is much faster and can be used with both PoA schemes. Considering implementation on storage/memory-constrained devices, we also study strategies for generating proofs efficiently.
Title: Optimizing Proof of Aliveness in Cyber-Physical Systems
Authors: Zheng Yang, Chenglu Jin, Xuelian Cao, Marten van Dijk, Jianying Zhou
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3335188
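The Bloom-filter compression of chain tails can be illustrated with a minimal sketch (all parameter choices and helper names here are illustrative, not the paper's construction): the verifier inserts each OWF chain's tail into a Bloom filter, and a released proof node is accepted if hashing it forward reaches a member of the filter. A Bloom filter admits false positives, which any real parameterization must account for.

```python
import hashlib

def owf(x: bytes) -> bytes:
    # One-way function instantiated with SHA-256 for illustration.
    return hashlib.sha256(x).digest()

class BloomFilter:
    def __init__(self, m_bits: int = 4096, k: int = 4):
        self.m, self.k, self.bits = m_bits, k, 0
    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h, "big") % self.m
    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def contains(self, item: bytes) -> bool:
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def make_chain(seed: bytes, length: int):
    nodes = [seed]
    for _ in range(length - 1):
        nodes.append(owf(nodes[-1]))
    return nodes  # nodes[-1] is the tail the verifier keeps

# Verifier setup: compress the tails of many chains into one Bloom filter
# instead of storing every tail individually.
chains = [make_chain(bytes([s]) * 32, 100) for s in range(8)]
bf = BloomFilter()
for c in chains:
    bf.add(c[-1])

def verify(proof: bytes, steps: int) -> bool:
    # Hash the revealed node forward; accept if we hit a known tail.
    x = proof
    for _ in range(steps):
        if bf.contains(x):
            return True
        x = owf(x)
    return bf.contains(x)
```

For example, `verify(chains[0][95], 10)` accepts, because four forward hashes reach chain 0's stored tail.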
Pub Date: 2024-07-01. DOI: 10.1109/TDSC.2023.3343872
Mojtaba Rafiee
A multi-adjustable join (M-Adjoin) scheme [Khazaei-Rafiee, IEEE TDSC 2020], a generalization of the Adjoin scheme [Popa-Zeldovich, MIT CSAIL TR 2012], is a symmetric-key primitive that enables a user to securely outsource their database to an external server and later issue join queries over a list of columns. In [Rafiee-Khazaei, IEEE TDSC 2021], building on the security notions previously defined for Adjoin [Mironov-Segev-Shahaf, TCC 2017], several security notions for M-Adjoin were proposed and their relationships investigated. Constructing an M-Adjoin scheme with indistinguishability security against an adaptive adversary has remained a challenging open problem. In this paper, we introduce two M-Adjoin constructions that achieve this strong security notion in the random oracle model. We prove the security of our constructions under the Decisional Diffie-Hellman assumption in G_1 (DDH1) in bilinear groups. Compared with previous constructions, our schemes achieve this higher security level without increasing computation or storage overheads.
Title: Multi-Adjustable Join Schemes With Adaptive Indistinguishably Security
Authors: Mojtaba Rafiee
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3343872
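To make the adjustable-join idea concrete, here is a toy, exponent-based token scheme in the spirit of the original Adjoin primitive (not the M-Adjoin constructions of this paper): each column has a secret key, cell values become deterministic group elements, and an "adjustment key" lets the server map tokens from one column into another to test join equality. The group parameters are tiny and for illustration only.

```python
import hashlib
import secrets

# Toy group: safe prime p = 2q + 1; g = 4 generates the order-q subgroup.
q, p, g = 1019, 2039, 4

def h(value: str) -> int:
    # Hash a cell value into a nonzero exponent mod q.
    return int.from_bytes(hashlib.sha256(value.encode()).digest(), "big") % (q - 1) + 1

def column_key() -> int:
    return secrets.randbelow(q - 1) + 1

def token(key: int, value: str) -> int:
    # Per-column deterministic token: g^(key * H(value)) mod p.
    return pow(g, (key * h(value)) % q, p)

def adjust_key(src_key: int, dst_key: int) -> int:
    # Delta = dst/src mod q lets the server re-key src-column tokens.
    return (dst_key * pow(src_key, -1, q)) % q

k1, k2 = column_key(), column_key()
delta = adjust_key(k1, k2)
# Server-side join test: adjust a token from column 1 and compare.
t1, t2 = token(k1, "alice"), token(k2, "alice")
assert pow(t1, delta, p) == t2
```

The correctness is just exponent arithmetic: (g^(k1·H(v)))^(k2/k1) = g^(k2·H(v)).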
Pub Date: 2024-07-01. DOI: 10.1109/TDSC.2023.3346064
Hong Joo Lee, Yonghyun Ro
Deep Neural Networks (DNNs) have been widely successful in various domains, but they are vulnerable to adversarial attacks. Recent studies have demonstrated that video recognition models are also susceptible to adversarial perturbations, yet existing image-domain defense strategies transfer poorly to the video domain: they do not account for temporal dynamics, and retraining video recognition models incurs a high computational cost. This article first investigates the temporal vulnerability of video recognition models by quantifying the effect of temporal perturbations on model performance. Based on these investigations, we propose Defense Patterns (DPs), which effectively protect video recognition models when added to the input video frames. The DPs are generated on top of a pre-trained model, eliminating the need for retraining or fine-tuning and thereby significantly reducing the computational cost. Experimental results on two benchmark datasets and various action recognition models demonstrate the effectiveness of the proposed method in enhancing the robustness of video recognition models.
Title: Defending Video Recognition Model Against Adversarial Perturbations via Defense Patterns
Authors: Hong Joo Lee, Yonghyun Ro
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3346064
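Mechanically, applying a defense pattern at inference time is just an additive, clipped overlay on every frame. The sketch below uses a random array as a stand-in for an optimized pattern (the optimization procedure is the paper's contribution and is not reproduced here); shapes and the `strength` parameter are illustrative assumptions.

```python
import numpy as np

def apply_defense_pattern(frames: np.ndarray, pattern: np.ndarray,
                          strength: float = 0.1) -> np.ndarray:
    """Add a defense pattern to every frame and clip back to [0, 1].

    frames:  (T, H, W, C) video clip with pixel values in [0, 1]
    pattern: (H, W, C) per-frame defense pattern
    """
    defended = frames + strength * pattern[None, ...]  # broadcast over time axis
    return np.clip(defended, 0.0, 1.0)

rng = np.random.default_rng(0)
clip = rng.random((16, 8, 8, 3))        # toy 16-frame clip
dp = rng.standard_normal((8, 8, 3))     # stand-in for an optimized pattern
out = apply_defense_pattern(clip, dp)
```

Because the pattern is fixed and added outside the network, the recognition model itself needs no retraining, which is the source of the claimed cost savings.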
Pub Date: 2024-07-01. DOI: 10.1109/TDSC.2023.3349180
Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhu Song, Zhaoxuan Li
Defending against Distributed Denial of Service (DDoS) attacks is a fundamental problem on the Internet. Over the past few decades, the research and industry communities have proposed a variety of solutions, from adding incremental capabilities to the existing Internet routing stack, to clean-slate future Internet architectures, to widely deployed commercial DDoS prevention services. Yet a recent interview with over 100 security practitioners in multiple sectors reveals that existing solutions are still insufficient, owing to either unenforceable protocol deployment or non-comprehensive traffic filters. This seemingly endless arms race with attackers suggests that we need a fundamental paradigm shift. In this paper, we propose a new DDoS prevention paradigm named preference-driven and in-network enforced traffic shaping, which focuses on delivering victim-preferred traffic rather than perpetually chasing after DDoS attacks. Towards this end, we propose DFNet, a novel DDoS prevention system that provides reliable delivery of victim-preferred traffic without full knowledge of DDoS attacks. At a high level, the core design of DFNet combines advances in machine learning (ML) with new network dataplane primitives: it encodes the victim's traffic preference (expressed as complex ML models) into dataplane packet-scheduling algorithms, so that victim-preferred traffic is forwarded with priority at line speed, regardless of the attacker's strategy. We implement a prototype of DFNet in 11,560 lines of code and extensively evaluate it on our testbed. The results show that a single instance of DFNet can forward 99.93% of victim-desired traffic when facing previously unseen attacks, while imposing less than 0.1% forwarding overhead on a dataplane with 80 Gbps upstream links and a 40 Gbps bottleneck.
Title: Effective DDoS Mitigation via ML-Driven In-Network Traffic Shaping
Authors: Ziming Zhao, Zhuotao Liu, Huan Chen, Fan Zhang, Zhu Song, Zhaoxuan Li
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3349180
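The preference-driven shaping idea can be sketched as strict-priority scheduling under a bottleneck budget: a (stand-in) preference model scores each packet, and the scheduler forwards the highest-scored packets first. This toy runs in user space with `heapq`; DFNet itself enforces this in the dataplane, and the flow names and scores below are invented for illustration.

```python
import heapq
from dataclasses import dataclass, field
from itertools import count

@dataclass(order=True)
class Packet:
    neg_score: float                    # heapq is a min-heap, so store -preference
    seq: int                            # arrival order breaks ties (FIFO within a class)
    flow: str = field(compare=False)

def preference(flow: str) -> float:
    # Stand-in for the victim's ML preference model (higher = more wanted).
    return {"customer": 0.9, "crawler": 0.4, "flood": 0.05}.get(flow, 0.1)

def shape(arrivals, budget):
    """Forward up to `budget` packets per round, highest preference first."""
    heap, seq = [], count()
    for flow in arrivals:
        heapq.heappush(heap, Packet(-preference(flow), next(seq), flow))
    return [heapq.heappop(heap).flow for _ in range(min(budget, len(heap)))]

# Five flood packets arrive first, yet preferred traffic wins the bottleneck.
sent = shape(["flood"] * 5 + ["customer"] * 3 + ["crawler"] * 2, budget=4)
```

Note the defender never has to classify the flood as an attack; it simply runs out of budget after the preferred traffic is served.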
Pub Date: 2024-07-01. DOI: 10.1109/TDSC.2023.3336857
Silvio E. Quincozes, Célio Albuquerque, Diego G. Passos, Daniel Mossé
Connected and digital electricity substations based on the IEC 61850 standards enable novel applications. On the other hand, such connectivity also creates an extended attack surface. Intrusion Detection Systems (IDSs) have therefore become an essential component of safeguarding substations from malicious activities. However, in contrast to traditional information technology systems, there is a serious lack of realistic data for training, testing, and evaluating IDSs in smart grid scenarios. Many existing substation IDSs rely on datasets from other contexts or on proprietary datasets that do not allow reproducibility, validation, or performance comparison with competing algorithms. To address this issue, we propose the Efficacious Reproducer Engine for Network Operations (ERENO), a synthetic traffic generation framework based on the IEC 61850 standard specifications. As an additional contribution, and as a proof of concept, we create and release a suite of realistic IEC 61850 datasets modeling eight use cases: traffic for seven common attacks and one for normal network traffic. Based on these datasets, we further evaluate how enriched features combining raw data from the substation can significantly improve intrusion detection performance. Our results suggest that they can improve the F1-score by up to 47.22% for masquerade attacks.
Title: ERENO: A Framework for Generating Realistic IEC–61850 Intrusion Detection Datasets for Smart Grids
Authors: Silvio E. Quincozes, Célio Albuquerque, Diego G. Passos, Daniel Mossé
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3336857
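Since the reported gain is in F1-score, a quick reminder of how it is computed from detection counts; the confusion-matrix numbers below are hypothetical and are not the paper's results.

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical masquerade-attack detections, before and after feature enrichment.
baseline = f1_score(tp=40, fp=35, fn=30)   # raw protocol fields only
enriched = f1_score(tp=58, fp=8, fn=12)    # with enriched features
```

Because F1 penalizes both false alarms and misses, enriched features that cut false positives and false negatives at the same time can raise it substantially even when raw accuracy moves little.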
Federated Learning (FL) is widely used in various industries because it effectively addresses the predicament of isolated data islands. However, eavesdroppers are capable of inferring user privacy from the gradients or models transmitted in FL. Homomorphic Encryption (HE) can be applied in FL to protect sensitive data owing to its computability over ciphertexts. However, traditional HE, as a single-key system, cannot prevent dishonest users from intercepting and decrypting the ciphertexts of cooperating users in FL. Guaranteeing both privacy and efficiency in this multi-user scenario remains a challenging goal. In this article, we propose SecFed, a secure and efficient federated learning scheme based on multi-key HE that preserves user privacy and delegates some operations to a Trusted Execution Environment (TEE) to improve efficiency while ensuring security. Specifically, we design the first TEE-based multi-key HE cryptosystem (EMK-BFV) to support privacy-preserving FL and optimize operational efficiency. Furthermore, we provide an offline protection mechanism to ensure normal system operation when participants disconnect. Finally, we give security proofs and demonstrate efficiency and superiority through comprehensive simulations and comparisons with existing schemes. SecFed offers a 3x performance improvement over a TEE-based scheme and a 2x improvement over an HE-based solution.
Title: SecFed: A Secure and Efficient Federated Learning Based on Multi-Key Homomorphic Encryption
Authors: Yuxuan Cai, Wenxiu Ding, Yuxuan Xiao, Zheng Yan, Ximeng Liu, Zhiguo Wan
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3336977
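The functional goal of multi-key aggregation (the server learns only the sum of the users' updates, never an individual update) can be illustrated with pairwise additive masking, a much simpler technique than the EMK-BFV cryptosystem proposed here; this sketch is only a stand-in for the privacy goal, not the paper's scheme.

```python
import secrets

MOD = 2**32  # updates live in Z_{2^32} for exact modular cancellation

def pairwise_masks(n_users: int):
    """masks[i][j] cancels masks[j][i]; each user learns only its own row."""
    masks = [[0] * n_users for _ in range(n_users)]
    for i in range(n_users):
        for j in range(i + 1, n_users):
            m = secrets.randbelow(MOD)
            masks[i][j] = m
            masks[j][i] = (-m) % MOD
    return masks

def mask_update(update, my_masks):
    # Toy: one shared offset per user; real schemes expand a fresh
    # pseudorandom mask per coordinate.
    offset = sum(my_masks) % MOD
    return [(x + offset) % MOD for x in update]

updates = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
masks = pairwise_masks(len(updates))
blinded = [mask_update(u, masks[i]) for i, u in enumerate(updates)]
# The server sums the blinded vectors; all pairwise masks cancel.
aggregate = [sum(col) % MOD for col in zip(*blinded)]
```

Each blinded vector looks uniformly random to the server, yet the column sums equal the true aggregate because every mask appears once positively and once negatively.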
The primitive of verifiable data streaming (VDS) provides a secure data outsourcing solution for resource-constrained users, that is, they can stream their continuously-generated data items to untrusted servers while enabling publicly verifiable query and update. However, existing VDS schemes either require the server to store the authentication tags of all data items to support data query and auditing, or bind all data items into a constant-size tag to achieve optimal storage on the server side, but cannot achieve public auditing. To close this gap, in this article, we first design a novel authentication data structure, dubbed retrievable homomorphic verifiable tags (RHVTs), which allows users to aggregate the authentication tags of all data items into a constant-size tag, and enables them to retrieve the original tags from the aggregated tag when necessary. Based on this, we propose a compact verifiable and auditable data streaming (CVADS) scheme, which adopts a single-level authentication mechanism to achieve more efficient data append and update, as well as optimal storage and public auditing. For better robustness and performance, we introduce a nested dual-level authentication mechanism and propose a blockchain-based CVADS (BCVADS) scheme to achieve a distributed CVADS with self-auditing. Finally, we prove the security of our schemes in the random oracle model and demonstrate their practicality through a visual performance evaluation.
Title: Blockchain-Based Compact Verifiable Data Streaming With Self-Auditing
Authors: Guohua Tian, Jianghong Wei, Meixia Miao, Fuchun Guo, Willy Susilo, Xiaofeng Chen
IEEE Transactions on Dependable and Secure Computing, 2024-07-01. DOI: 10.1109/TDSC.2023.3340208
Zero-knowledge scalable transparent arguments of knowledge (zk-STARKs) are a promising approach to solving the blockchain scalability problem while maintaining security, decentralization, and privacy. However, compared with zero-knowledge proofs with trusted setups deployed in existing scalability solutions, zk-STARKs are usually less efficient. In this paper, we introduce Ligerolight, an optimized zk-STARK for the arithmetic circuit satisfiability problem, following the interactive-oracle-proof framework of Ligero (ACM CCS 2017) and Aurora (Eurocrypt 2019), that can be used for blockchain scalability. Evaluations show that Ligerolight outperforms existing zk-STARKs: proof generation is 30% faster than Aurora's when proving the computation of an authentication path of a Merkle tree with 32 leaves; the proof size is about 131 KB, one-tenth of Ligero's and 50% smaller than Aurora's; and verification is twice as fast as Aurora's. Underlying Ligerolight is a new batch zero-knowledge inner-product argument that proves multiple inner-product relations at once. Using this argument, we build a batch multivariate polynomial commitment with poly-logarithmic communication and verification complexity.
{"title":"Ligerolight: Optimized IOP-Based Zero-Knowledge Argument for Blockchain Scalability","authors":"Zongyang Zhang, Weihan Li, Ximeng Liu, Xin Chen, Qihang Peng","doi":"10.1109/TDSC.2023.3336717"}
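The Merkle-tree benchmark cited in the abstract above (an authentication path in a 32-leaf tree) can be made concrete with a minimal sketch. This uses ordinary SHA-256 Merkle hashing purely for illustration; it is not Ligerolight's actual arithmetic circuit, hash choice, or proof system.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    # Complete binary Merkle tree; levels[0] is the hashed leaf layer,
    # levels[-1] contains only the root.
    levels = [[h(leaf) for leaf in leaves]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([h(prev[i] + prev[i + 1]) for i in range(0, len(prev), 2)])
    return levels

def auth_path(levels, index):
    # The authentication path is the sibling hash at every level
    # on the way from the leaf up to (but not including) the root.
    path = []
    for level in levels[:-1]:
        path.append(level[index ^ 1])
        index //= 2
    return path

def verify(leaf, index, path, root):
    # Recompute the root from the leaf and its sibling path.
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

leaves = [bytes([i]) for i in range(32)]  # 32 leaves, as in the benchmark
levels = build_tree(leaves)
root = levels[-1][0]
path = auth_path(levels, 5)
```

With 32 leaves the path contains 5 sibling hashes, so the circuit proved in the benchmark essentially checks 5 chained hash evaluations.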
Pub Date: 2024-07-01, DOI: 10.1109/TDSC.2023.3341090
Eldor Abdukhamidov, Mohammad Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed
Deep learning methods have gained increasing attention in various applications due to their outstanding performance. To explore how this high performance relates to the proper use of data artifacts and the accurate formulation of a given task, interpretation models have become a crucial component in developing deep learning-based systems. Interpretation models enable understanding of the inner workings of deep learning models and offer a degree of security by detecting the misuse of artifacts in the input data. Like prediction models, however, interpretation models are also susceptible to adversarial inputs. This work introduces two attacks, AdvEdge and AdvEdge$^{+}$, which deceive both the target deep learning model and the coupled interpretation model. We assess the effectiveness of the proposed attacks against four deep learning model architectures coupled with four interpretation models that represent different categories of interpreters. Our experiments implement the attacks using various attack frameworks. We also explore attack resilience against three general defense mechanisms and potential countermeasures. Our analysis shows that our attacks effectively deceive the deep learning models and their interpreters, and highlights insights for improving and circumventing the attacks.
{"title":"Hardening Interpretable Deep Learning Systems: Investigating Adversarial Threats and Defenses","authors":"Eldor Abdukhamidov, Mohammad Abuhamad, Simon S. Woo, Eric Chan-Tin, Tamer Abuhmed","doi":"10.1109/TDSC.2023.3341090"}
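For readers unfamiliar with adversarial inputs, the core mechanism can be sketched with a generic Fast Gradient Sign Method (FGSM) step on a toy logistic model. This is a standard textbook perturbation under illustrative assumptions, not the AdvEdge attack itself, which additionally targets the coupled interpreter.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    # FGSM step: move every input coordinate by epsilon in the
    # direction that increases the loss.
    return x + epsilon * np.sign(grad)

# Toy logistic model: p(y=1|x) = sigmoid(w.x). The gradient of the
# cross-entropy loss w.r.t. x for true label y is (p - y) * w.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
x = rng.normal(size=8)
y = 1.0
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad_x = (p - y) * w
x_adv = fgsm_perturb(x, grad_x, epsilon=0.1)
```

The perturbation is bounded by epsilon per coordinate, yet it is guaranteed to push the model's confidence in the true label down, which is why such inputs are hard to spot but effective.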
Pub Date: 2024-07-01, DOI: 10.1109/TDSC.2023.3347753
Yang Wei, Zhuo Ma, Zhuo Ma, Zhan Qin, Yang Liu, Bin Xiao, Xiuli Bi, Jianfeng Ma
Recent substitute training methods have utilized the concept of Generative Adversarial Networks (GANs) to implement data-free black-box attacks. Specifically, these methods design their generators with a structure similar to the generators in GANs. However, this design overlooks a key difference: the generators in GANs operate under real-data supervision, while the generators in substitute training methods lack such supervision. This difference in supervision constrains the diversity of the data generated by substitute training methods, leaving inadequate data to support effective training of the substitute model, which in turn weakens the substitute model's ability to attack the target model. To address these issues, we propose three strategies to improve the attack success rate. For the generator, we first propose a dense projection space that projects the input noise into various latent feature spaces to diversify feature information. Then, we introduce a novel disguised natural color mode, which improves information exchange between the generator's output layer and previous layers, allowing for more diverse generated data. In addition, we present a regularization method for the substitute model, called noise-based balanced learning, to prevent the risk of overfitting caused by the limited diversity of the generated data. In the experimental analysis, extensive experiments validate the effectiveness of the proposed strategies.
{"title":"Effectively Improving Data Diversity of Substitute Training for Data-Free Black-Box Attack","authors":"Yang Wei, Zhuo Ma, Zhuo Ma, Zhan Qin, Yang Liu, Bin Xiao, Xiuli Bi, Jianfeng Ma","doi":"10.1109/TDSC.2023.3347753"}
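The "dense projection space" idea described in the abstract above, projecting one input noise vector into several latent feature spaces, can be illustrated with a toy sketch. The projection sizes, the number of spaces, and the tanh activation are all assumptions for illustration, not the paper's actual generator architecture.

```python
import numpy as np

def dense_projection(z, projections):
    # Project a single noise vector through several independent linear
    # maps and concatenate the results, so downstream layers see
    # diversified feature information instead of one latent view.
    return np.concatenate([np.tanh(P @ z) for P in projections])

rng = np.random.default_rng(1)
z = rng.normal(size=16)                            # input noise vector
projections = [rng.normal(size=(32, 16)) for _ in range(4)]
features = dense_projection(z, projections)        # 4 latent views, concatenated
```

Each of the four hypothetical projection matrices yields a 32-dimensional latent view, so the generator's later layers receive a 128-dimensional feature vector built from multiple latent spaces rather than a single one.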