Chaofan Shou, Yuanyu Ke, Yupeng Yang, Qi Su, Or Dadosh, Assaf Eli, David Benchimol, Doudou Lu, Daniel Tong, Dex Chen, Zoey Tan, Jacob Chia, Koushik Sen, Wenke Lee
Billions of dollars have been lost due to vulnerabilities in smart contracts. To counteract this, researchers have proposed attack frontrunning protections designed to preempt malicious transactions by inserting "whitehat" transactions ahead of them to protect the assets. In this paper, we demonstrate that existing frontrunning protections have become ineffective in real-world scenarios. Specifically, we collected 158 recent real-world attack transactions and discovered that 141 of them can bypass state-of-the-art frontrunning protections. We systematically analyze these attacks and show how inherent limitations of existing frontrunning techniques hinder them from protecting valuable assets in the real world. We then propose a new approach involving 1) preemptive hijack and 2) attack backrunning, which circumvent these limitations and can help protect assets before and after an attack. Our approach adapts the exploit used in the attack to the same or similar contracts, before and after the attack, to safeguard the assets. We conceptualize exploit adaptation as a program repair problem and apply established techniques to implement our approach in a full-fledged framework, BACKRUNNER. Run on previous attacks from 2023, BACKRUNNER can successfully rescue more than $410M. In the real world, it has helped rescue over $11.2M worth of assets in 28 separate incidents within two months.
{"title":"BACKRUNNER: Mitigating Smart Contract Attacks in the Real World","authors":"Chaofan Shou, Yuanyu Ke, Yupeng Yang, Qi Su, Or Dadosh, Assaf Eli, David Benchimol, Doudou Lu, Daniel Tong, Dex Chen, Zoey Tan, Jacob Chia, Koushik Sen, Wenke Lee","doi":"arxiv-2409.06213","DOIUrl":"https://doi.org/arxiv-2409.06213","url":null,"abstract":"Billions of dollars have been lost due to vulnerabilities in smart contracts.\u0000To counteract this, researchers have proposed attack frontrunning protections\u0000designed to preempt malicious transactions by inserting \"whitehat\" transactions\u0000ahead of them to protect the assets. In this paper, we demonstrate that\u0000existing frontrunning protections have become ineffective in real-world\u0000scenarios. Specifically, we collected 158 recent real-world attack transactions\u0000and discovered that 141 of them can bypass state-of-the-art frontrunning\u0000protections. We systematically analyze these attacks and show how inherent\u0000limitations of existing frontrunning techniques hinder them from protecting\u0000valuable assets in the real world. We then propose a new approach involving 1)\u0000preemptive hijack, and 2) attack backrunning, which circumvent the existing\u0000limitations and can help protect assets before and after an attack. Our\u0000approach adapts the exploit used in the attack to the same or similar contracts\u0000before and after the attack to safeguard the assets. We conceptualize adapting\u0000exploits as a program repair problem and apply established techniques to\u0000implement our approach into a full-fledged framework, BACKRUNNER. Running on\u0000previous attacks in 2023, BACKRUNNER can successfully rescue more than $410M.\u0000In the real world, it has helped rescue over $11.2M worth of assets in 28\u0000separate incidents within two months.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yichuan Mo, Hui Huang, Mingjie Li, Ang Li, Yisen Wang
Diffusion models have achieved notable success in image generation, but they remain highly vulnerable to backdoor attacks, which compromise their integrity by producing specific undesirable outputs when presented with a pre-defined trigger. In this paper, we investigate how to protect diffusion models from this threat. Specifically, we propose TERD, a backdoor defense framework that builds a unified model of current attacks, which enables us to derive an accessible reversed loss. A trigger reversion strategy is then employed: the trigger is first approximated with noise sampled from a prior distribution and then refined through differential multi-step samplers. With the reversed trigger, we further propose backdoor detection in the noise space, introducing the first backdoor input detection approach for diffusion models and a novel model detection algorithm that computes the KL divergence between the reversed and benign distributions. Extensive evaluations demonstrate that TERD achieves a 100% True Positive Rate (TPR) and True Negative Rate (TNR) across datasets of varying resolutions. TERD also adapts well to other Stochastic Differential Equation (SDE)-based models. Our code is available at https://github.com/PKU-ML/TERD.
{"title":"TERD: A Unified Framework for Safeguarding Diffusion Models Against Backdoors","authors":"Yichuan Mo, Hui Huang, Mingjie Li, Ang Li, Yisen Wang","doi":"arxiv-2409.05294","DOIUrl":"https://doi.org/arxiv-2409.05294","url":null,"abstract":"Diffusion models have achieved notable success in image generation, but they\u0000remain highly vulnerable to backdoor attacks, which compromise their integrity\u0000by producing specific undesirable outputs when presented with a pre-defined\u0000trigger. In this paper, we investigate how to protect diffusion models from\u0000this dangerous threat. Specifically, we propose TERD, a backdoor defense\u0000framework that builds unified modeling for current attacks, which enables us to\u0000derive an accessible reversed loss. A trigger reversion strategy is further\u0000employed: an initial approximation of the trigger through noise sampled from a\u0000prior distribution, followed by refinement through differential multi-step\u0000samplers. Additionally, with the reversed trigger, we propose backdoor\u0000detection from the noise space, introducing the first backdoor input detection\u0000approach for diffusion models and a novel model detection algorithm that\u0000calculates the KL divergence between reversed and benign distributions.\u0000Extensive evaluations demonstrate that TERD secures a 100% True Positive Rate\u0000(TPR) and True Negative Rate (TNR) across datasets of varying resolutions. TERD\u0000also demonstrates nice adaptability to other Stochastic Differential Equation\u0000(SDE)-based models. Our code is available at https://github.com/PKU-ML/TERD.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"76 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cesare Caratozzolo, Valeria Rossi, Kamil Witek, Alberto Trombetta, Massimo Caccia
Generating random bit streams is required in various applications, most notably in cyber-security. Ensuring high-quality and robust randomness is crucial to mitigate risks associated with predictability and system compromise. True random numbers provide the highest level of unpredictability. However, potential biases in the processes exploited for random number generation must be carefully monitored. This paper reports the implementation and characterization of an on-line procedure for detecting anomalies in a true random bit stream. It is based on the NIST Adaptive Proportion and Repetition Count tests, complemented by statistical analysis relying on the Monobit and Runs tests. The procedure is implemented in firmware, runs concurrently with bit stream generation, and also provides an estimate of the entropy of the source. The approach is validated experimentally on bit streams generated by a quantum, silicon-based entropy source.
{"title":"Efficient Quality Estimation of True Random Bit-streams","authors":"Cesare Caratozzolo, Valeria Rossi, Kamil Witek, Alberto Trombetta, Massimo Caccia","doi":"arxiv-2409.05543","DOIUrl":"https://doi.org/arxiv-2409.05543","url":null,"abstract":"Generating random bit streams is required in various applications, most\u0000notably cyber-security. Ensuring high-quality and robust randomness is crucial\u0000to mitigate risks associated with predictability and system compromise. True\u0000random numbers provide the highest unpredictability levels. However, potential\u0000biases in the processes exploited for the random number generation must be\u0000carefully monitored. This paper reports the implementation and characterization\u0000of an on-line procedure for the detection of anomalies in a true random bit\u0000stream. It is based on the NIST Adaptive Proportion and Repetition Count tests,\u0000complemented by statistical analysis relying on the Monobit and RUNS. The\u0000procedure is firmware implemented and performed simultaneously with the bit\u0000stream generation, and providing as well an estimate of the entropy of the\u0000source. The experimental validation of the approach is performed upon the bit\u0000streams generated by a quantum, silicon-based entropy source.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201448","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ethereum faces growing fraud threats. Current fraud detection methods, whether employing graph neural networks or sequence models, fail to consider the semantic information and similarity patterns within transactions. Moreover, these approaches do not leverage the potential synergistic benefits of combining both types of models. To address these challenges, we propose TLMG4Eth, which combines a transaction language model with graph-based methods to capture the semantic, similarity, and structural features of transaction data in Ethereum. We first propose a transaction language model that converts numerical transaction data into meaningful transaction sentences, enabling the model to learn explicit transaction semantics. We then propose a transaction attribute similarity graph to learn transaction similarity information, enabling us to capture intuitive insights into transaction anomalies. Additionally, we construct an account interaction graph to capture the structural information of the account transaction network. We employ a deep multi-head attention network to fuse transaction semantic and similarity embeddings, and ultimately propose a joint training approach for the multi-head attention network and the account interaction graph to obtain the synergistic benefits of both.
{"title":"Ethereum Fraud Detection via Joint Transaction Language Model and Graph Representation Learning","authors":"Yifan Jia, Yanbin Wang, Jianguo Sun, Yiwei Liu, Zhang Sheng, Ye Tian","doi":"arxiv-2409.07494","DOIUrl":"https://doi.org/arxiv-2409.07494","url":null,"abstract":"Ethereum faces growing fraud threats. Current fraud detection methods,\u0000whether employing graph neural networks or sequence models, fail to consider\u0000the semantic information and similarity patterns within transactions. Moreover,\u0000these approaches do not leverage the potential synergistic benefits of\u0000combining both types of models. To address these challenges, we propose\u0000TLMG4Eth that combines a transaction language model with graph-based methods to\u0000capture semantic, similarity, and structural features of transaction data in\u0000Ethereum. We first propose a transaction language model that converts numerical\u0000transaction data into meaningful transaction sentences, enabling the model to\u0000learn explicit transaction semantics. Then, we propose a transaction attribute\u0000similarity graph to learn transaction similarity information, enabling us to\u0000capture intuitive insights into transaction anomalies. Additionally, we\u0000construct an account interaction graph to capture the structural information of\u0000the account transaction network. We employ a deep multi-head attention network\u0000to fuse transaction semantic and similarity embeddings, and ultimately propose\u0000a joint training approach for the multi-head attention network and the account\u0000interaction graph to obtain the synergistic benefits of both.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"166 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The standard definition of differential privacy (DP) ensures that a mechanism's output distribution on adjacent datasets is indistinguishable. However, real-world implementations of DP can, and often do, reveal information through their runtime distributions, making them susceptible to timing attacks. In this work, we establish a general framework for ensuring differential privacy in the presence of timing side channels. We define a new notion of timing privacy, which captures programs that remain differentially private against an adversary that observes the program's runtime in addition to its output. Our framework enables chaining together timing-stable component programs, followed by a random delay, to obtain DP programs that achieve timing privacy. Importantly, our definitions allow timing privacy and output privacy to be measured with different privacy measures. We illustrate how to instantiate our framework by giving programs for standard DP computations in the RAM and Word RAM models of computation. Furthermore, we show how our framework can be realized in code through a natural extension of the OpenDP Programming Framework.
{"title":"A Framework for Differential Privacy Against Timing Attacks","authors":"Zachary Ratliff, Salil Vadhan","doi":"arxiv-2409.05623","DOIUrl":"https://doi.org/arxiv-2409.05623","url":null,"abstract":"The standard definition of differential privacy (DP) ensures that a\u0000mechanism's output distribution on adjacent datasets is indistinguishable.\u0000However, real-world implementations of DP can, and often do, reveal information\u0000through their runtime distributions, making them susceptible to timing attacks.\u0000In this work, we establish a general framework for ensuring differential\u0000privacy in the presence of timing side channels. We define a new notion of\u0000timing privacy, which captures programs that remain differentially private to\u0000an adversary that observes the program's runtime in addition to the output. Our\u0000framework enables chaining together component programs that are timing-stable\u0000followed by a random delay to obtain DP programs that achieve timing privacy.\u0000Importantly, our definitions allow for measuring timing privacy and output\u0000privacy using different privacy measures. We illustrate how to instantiate our\u0000framework by giving programs for standard DP computations in the RAM and Word\u0000RAM models of computation. Furthermore, we show how our framework can be\u0000realized in code through a natural extension of the OpenDP Programming\u0000Framework.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Diffusion Models (DMs) achieve state-of-the-art synthesis results in image generation and have been applied in various fields. However, DMs sometimes seriously violate user privacy during usage, making privacy protection an urgent issue. Directly applying traditional privacy-preserving computation schemes such as Secure Multi-Party Computation (MPC) to DMs faces significant computation and communication challenges. To address these issues, we propose CipherDM, the first versatile and universal framework that applies MPC technology to DMs for secure sampling and can be widely deployed across multiple DM-based tasks. We thoroughly analyze the sampling latency breakdown, identify the time-consuming parts, and design corresponding secure MPC protocols for computing nonlinear activations, including SoftMax, SiLU, and Mish. CipherDM is evaluated on popular architectures (DDPM, DDIM) using the MNIST dataset and on Stable Diffusion (SD) deployed via diffusers. Compared to a direct implementation on SPU, our approach improves running time by approximately 1.084x to 2.328x and reduces communication costs by approximately 1.212x to 1.791x.
{"title":"CipherDM: Secure Three-Party Inference for Diffusion Model Sampling","authors":"Xin Zhao, Xiaojun Chen, Xudong Chen, He Li, Tingyu Fan, Zhendong Zhao","doi":"arxiv-2409.05414","DOIUrl":"https://doi.org/arxiv-2409.05414","url":null,"abstract":"Diffusion Models (DMs) achieve state-of-the-art synthesis results in image\u0000generation and have been applied to various fields. However, DMs sometimes\u0000seriously violate user privacy during usage, making the protection of privacy\u0000an urgent issue. Using traditional privacy computing schemes like Secure\u0000Multi-Party Computation (MPC) directly in DMs faces significant computation and\u0000communication challenges. To address these issues, we propose CipherDM, the\u0000first novel, versatile and universal framework applying MPC technology to DMs\u0000for secure sampling, which can be widely implemented on multiple DM based\u0000tasks. We thoroughly analyze sampling latency breakdown, find time-consuming\u0000parts and design corresponding secure MPC protocols for computing nonlinear\u0000activations including SoftMax, SiLU and Mish. CipherDM is evaluated on popular\u0000architectures (DDPM, DDIM) using MNIST dataset and on SD deployed by diffusers.\u0000Compared to direct implementation on SPU, our approach improves running time by\u0000approximately 1.084times sim 2.328times, and reduces communication costs by\u0000approximately 1.212times sim 1.791times.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The National Institute of Standards and Technology (NIST) has finalized the selection of post-quantum cryptographic (PQC) algorithms for use in the era of quantum computing. Despite their integration into the TLS protocol for key establishment and signature generation, there has been limited study of how these newly standardized algorithms perform in resource-constrained communication systems. In this work, we integrate PQC into both TLS servers and clients built on embedded systems. Additionally, we compare the performance overhead of PQC algorithm pairs to the non-PQC schemes currently in use.
{"title":"Evaluating Post-Quantum Cryptography on Embedded Systems: A Performance Analysis","authors":"Ben Dong, Qian Wang","doi":"arxiv-2409.05298","DOIUrl":"https://doi.org/arxiv-2409.05298","url":null,"abstract":"The National Institute of Standards and Technology (NIST) has finalized the\u0000selection of post-quantum cryptographic (PQC) algorithms for use in the era of\u0000quantum computing. Despite their integration into TLS protocol for key\u0000establishment and signature generation, there is limited study on profiling\u0000these newly standardized algorithms in resource-constrained communication\u0000systems. In this work, we integrate PQC into both TLS servers and clients built\u0000upon embedded systems. Additionally, we compare the performance overhead of PQC\u0000pairs to currently used non-PQC schemes.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"7 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neural compression has the potential to revolutionize lossy image compression. Based on generative models, recent schemes achieve unprecedented compression rates at high perceptual quality but compromise semantic fidelity. Details of decompressed images may appear optically flawless but semantically different from the originals, making compression errors difficult or impossible to detect. We explore the problem space and propose a provisional taxonomy of miscompressions. It defines three types of 'what happens' and has a binary 'high impact' flag indicating miscompressions that alter symbols. We discuss how the taxonomy can facilitate risk communication and research into mitigations.
{"title":"A Taxonomy of Miscompressions: Preparing Image Forensics for Neural Compression","authors":"Nora Hofer, Rainer Böhme","doi":"arxiv-2409.05490","DOIUrl":"https://doi.org/arxiv-2409.05490","url":null,"abstract":"Neural compression has the potential to revolutionize lossy image\u0000compression. Based on generative models, recent schemes achieve unprecedented\u0000compression rates at high perceptual quality but compromise semantic fidelity.\u0000Details of decompressed images may appear optically flawless but semantically\u0000different from the originals, making compression errors difficult or impossible\u0000to detect. We explore the problem space and propose a provisional taxonomy of\u0000miscompressions. It defines three types of 'what happens' and has a binary\u0000'high impact' flag indicating miscompressions that alter symbols. We discuss\u0000how the taxonomy can facilitate risk communication and research into\u0000mitigations.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-preserving neural network (NN) inference can be achieved by utilizing homomorphic encryption (HE), which allows computations to be carried out directly over ciphertexts. Popular HE schemes are built over large polynomial rings. To allow simultaneous multiplications in the convolutional (Conv) and fully-connected (FC) layers, multiple input data are mapped to coefficients in the same polynomial, as are the weights of the NNs. However, ciphertext rotations are necessary to compute the sums of products and/or incorporate the outputs of different channels into the same polynomials. Ciphertext rotations have much higher complexity than ciphertext multiplications and contribute the majority of the latency of HE-evaluated Conv and FC layers. This paper proposes a novel reformulated server-client joint computation procedure and a new filter coefficient packing scheme to eliminate ciphertext rotations without affecting the security of the HE scheme. Our proposed scheme also leads to substantial reductions in the number of coefficient multiplications needed and in the communication cost between the server and client. For various plain-20 classifiers over the CIFAR-10/100 datasets, our design reduces the running time of the Conv and FC layers by 15.5% and the communication cost between client and server by more than 50%, compared to the best prior design.
{"title":"Efficient Homomorphically Encrypted Convolutional Neural Network Without Rotation","authors":"Sajjad Akherati, Xinmiao Zhang","doi":"arxiv-2409.05205","DOIUrl":"https://doi.org/arxiv-2409.05205","url":null,"abstract":"Privacy-preserving neural network (NN) inference can be achieved by utilizing\u0000homomorphic encryption (HE), which allows computations to be directly carried\u0000out over ciphertexts. Popular HE schemes are built over large polynomial rings.\u0000To allow simultaneous multiplications in the convolutional (Conv) and\u0000fully-connected (FC) layers, multiple input data are mapped to coefficients in\u0000the same polynomial, so are the weights of NNs. However, ciphertext rotations\u0000are necessary to compute the sums of products and/or incorporate the outputs of\u0000different channels into the same polynomials. Ciphertext rotations have much\u0000higher complexity than ciphertext multiplications and contribute to the\u0000majority of the latency of HE-evaluated Conv and FC layers. This paper proposes\u0000a novel reformulated server-client joint computation procedure and a new filter\u0000coefficient packing scheme to eliminate ciphertext rotations without affecting\u0000the security of the HE scheme. Our proposed scheme also leads to substantial\u0000reductions on the number of coefficient multiplications needed and the\u0000communication cost between the server and client. For various plain-20\u0000classifiers over the CIFAR-10/100 datasets, our design reduces the running time\u0000of the Conv and FC layers by 15.5% and the communication cost between client\u0000and server by more than 50%, compared to the best prior design.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image steganography is a technique for concealing secret messages within digital images. Steganalysis, by contrast, aims to detect the presence of secret messages within images. Recently, deep-learning-based steganalysis methods have achieved excellent detection performance. As a countermeasure, adversarial steganography has garnered considerable attention due to its ability to effectively deceive deep-learning-based steganalysis. However, steganalysts often employ unknown steganalytic models for detection. Therefore, the ability of adversarial steganography to deceive non-target steganalytic models, known as transferability, becomes especially important. Nevertheless, existing adversarial steganographic methods do not consider how to enhance transferability. To address this issue, we propose a novel adversarial steganographic scheme named Natias. Specifically, we first attribute the output of a steganalytic model to each neuron in the target middle layer to identify critical features. Next, we corrupt these critical features, which may be adopted by diverse steganalytic models. Consequently, this promotes the transferability of adversarial steganography. Our proposed method can be seamlessly integrated with existing adversarial steganography frameworks. Thorough experimental analyses confirm that our technique achieves improved transferability compared with prior approaches and attains heightened security in retraining scenarios.
{"title":"Natias: Neuron Attribution based Transferable Image Adversarial Steganography","authors":"Zexin Fan, Kejiang Chen, Kai Zeng, Jiansong Zhang, Weiming Zhang, Nenghai Yu","doi":"arxiv-2409.04968","DOIUrl":"https://doi.org/arxiv-2409.04968","url":null,"abstract":"Image steganography is a technique to conceal secret messages within digital\u0000images. Steganalysis, on the contrary, aims to detect the presence of secret\u0000messages within images. Recently, deep-learning-based steganalysis methods have\u0000achieved excellent detection performance. As a countermeasure, adversarial\u0000steganography has garnered considerable attention due to its ability to\u0000effectively deceive deep-learning-based steganalysis. However, steganalysts\u0000often employ unknown steganalytic models for detection. Therefore, the ability\u0000of adversarial steganography to deceive non-target steganalytic models, known\u0000as transferability, becomes especially important. Nevertheless, existing\u0000adversarial steganographic methods do not consider how to enhance\u0000transferability. To address this issue, we propose a novel adversarial\u0000steganographic scheme named Natias. Specifically, we first attribute the output\u0000of a steganalytic model to each neuron in the target middle layer to identify\u0000critical features. Next, we corrupt these critical features that may be adopted\u0000by diverse steganalytic models. Consequently, it can promote the\u0000transferability of adversarial steganography. Our proposed method can be\u0000seamlessly integrated with existing adversarial steganography frameworks.\u0000Thorough experimental analyses affirm that our proposed technique possesses\u0000improved transferability when contrasted with former approaches, and it attains\u0000heightened security in retraining scenarios.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142201613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}