Traditionally, a CP-ABE scheme includes four basic algorithms, Setup, KeyGen, Encrypt, and Decrypt, as shown in Figure 1(a). If the data owner wants to change the access policy of the data, he/she must download, re-encrypt, and then re-upload a new ciphertext. NIPU-CP-ABE consists of seven polynomial-time algorithms, as shown in Figure 1(b): Setup and KeyGen are executed by a trusted center; UpdateKeyGen, Encrypt, and PolicyUpdate are executed by the data owner; Decrypt is executed by the data receivers; and CiphertextUpdate is executed by a semi-trusted storage platform. If the data owner wants to change the data access policy, he/she can directly generate a public update component (PUC). The access policy can then be changed using only the PUC and the existing ciphertext. That is to say, the ciphertext under a new access policy can be synthesized from the ciphertext under the old policy and a sectional ciphertext under the new policy. We can express the update simply as

Old CT + PUC → New CT

or, equivalently, state the following equivalence relation for policy update:

Decrypt + Encrypt ⇔ PolicyUpdate + CiphertextUpdate

This brings the advantage that the number of communication rounds needed to change a data access policy is half that of traditional re-encryption.
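The "Old CT + PUC → New CT" flow can be illustrated with a toy sketch. Everything below is a hypothetical stand-in, not a real CP-ABE construction: ciphertexts are XOR blobs keyed by a per-policy pad, so the storage platform can re-key a ciphertext with a public update component without ever seeing the plaintext. The `MASTER` key, `policy_pad` derivation, and policy strings are all illustrative assumptions.

```python
# Toy illustration of the NIPU-CP-ABE update flow (hypothetical, NOT a real
# CP-ABE scheme): ciphertexts are one-time-pad-style XOR blobs keyed by a
# per-policy pad, so the storage platform can apply a public update
# component (PUC) without learning the plaintext.
import hashlib
import secrets

MASTER = secrets.token_bytes(32)  # held by the data owner (illustrative)

def policy_pad(policy: str, n: int) -> bytes:
    # Derive an n-byte pad for a given access policy (stand-in for ABE).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(MASTER + policy.encode() + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(msg: bytes, policy: str) -> dict:        # data owner
    return {"policy": policy, "blob": xor(msg, policy_pad(policy, len(msg)))}

def policy_update(ct: dict, new_policy: str) -> dict:  # data owner: PUC only
    n = len(ct["blob"])
    return {"new_policy": new_policy,
            "delta": xor(policy_pad(ct["policy"], n), policy_pad(new_policy, n))}

def ciphertext_update(ct: dict, puc: dict) -> dict:  # storage platform
    # Old CT + PUC -> New CT, without decrypting.
    return {"policy": puc["new_policy"], "blob": xor(ct["blob"], puc["delta"])}

def decrypt(ct: dict) -> bytes:                      # authorized receiver
    return xor(ct["blob"], policy_pad(ct["policy"], len(ct["blob"])))

msg = b"secret record"
ct_old = encrypt(msg, "dept:HR AND role:manager")
ct_new = ciphertext_update(ct_old, policy_update(ct_old, "dept:HR"))
assert decrypt(ct_new) == msg  # one PUC upload instead of download + re-upload
```

In this sketch the owner transmits only the PUC, while traditional re-encryption would require downloading the old ciphertext and uploading a fresh one, mirroring the halved communication claim.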
Wei Yuan. "Non-Interactive Cryptographic Access Control for Secure Outsourced Storage." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421367
Recent years have brought an influx of privacy-conscious applications that enable strong security guarantees for end users via end-to-end or client-side encryption. Unfortunately, this application paradigm is not easily transferable to web-based cloud applications. The reason lies in the adversary's enhanced control over client-side computing through application-provided JavaScript. In this paper, we propose CryptoMembranes, a set of native client-side components that allow the development of web applications which provide a robust isolation layer between client-side encrypted user data and potentially untrusted JavaScript, while maintaining full interoperability with current client-side development practices. In addition, to enable a realistic transition phase, we show how CryptoMembranes can be realized for existing web browsers via a standard browser extension.
Martin Johns, Alexandra Dirksen. "Towards Enabling Secure Web-Based Cloud Services using Client-Side Encryption." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421364
Verifpal is a new automated modeling framework and verifier for cryptographic protocols, optimized with heuristics for common-case protocol specifications, that aims to work better for real-world practitioners, students, and engineers without sacrificing comprehensive formal verification features. To achieve this, Verifpal introduces a new, intuitive language for modeling protocols that is easier to write and understand than the languages employed by existing tools. Its formal verification paradigm is also designed explicitly to provide protocol modeling that avoids user error. Verifpal can model protocols under an active attacker with unbounded sessions and fresh values, and supports queries for advanced security properties such as forward secrecy or key compromise impersonation. Furthermore, Verifpal's semantics have been formalized within the Coq theorem prover, and Verifpal models can be automatically translated into Coq as well as into ProVerif models for further verification. Verifpal has already been used to verify security properties for Signal, Scuttlebutt, and TLS 1.3, as well as the first formal model of the DP-3T pandemic-tracing protocol, which we present in this work. Through Verifpal, we show that advanced verification with formalized semantics and sound logic can exist without sacrificing the convenience expected by real-world practitioners.
Nadim Kobeissi, Georgio Nicolas, Mukesh Tiwari. "Verifpal: Cryptographic Protocol Analysis for the Real World." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421365
Neural networks are increasingly important in the development of Network Intrusion Detection Systems (NIDS), as they have the potential to achieve high detection accuracy while requiring limited feature engineering. Deep learning-based detectors can, however, be vulnerable to adversarial examples, by which attackers who may be oblivious to the precise mechanics of the targeted NIDS add subtle perturbations to malicious traffic features, with the aim of evading detection and disrupting critical systems in a cost-effective manner. Defending against such adversarial attacks is therefore of high importance, but requires addressing daunting challenges. In this paper, we introduce Tiki-Taka, a general framework that (i) assesses the robustness of state-of-the-art deep learning-based NIDS against adversarial manipulations, and (ii) incorporates our proposed defense mechanisms to increase the NIDS' resistance to attacks employing such evasion techniques. Specifically, we select five different cutting-edge adversarial attack mechanisms to subvert three popular malicious traffic detectors that employ neural networks. We experiment with a publicly available dataset and consider both one-to-all and one-to-one classification scenarios, i.e., discriminating illicit vs. benign traffic and, respectively, identifying specific types of anomalous traffic among many observed. The results reveal that, under realistic constraints, attackers can evade the NIDS with success rates of up to 35.7% by altering only time-based features of the generated traffic. To counteract these weaknesses, we propose three defense mechanisms, namely: model voting ensembling, ensembling adversarial training, and query detection. To the best of our knowledge, our work is the first to propose defenses against adversarial attacks targeting NIDS. We demonstrate that, when employing the proposed methods, intrusion detection rates can be improved to nearly 100% against most types of malicious traffic, and attacks with potentially catastrophic consequences (e.g., botnets) can be thwarted. This confirms the effectiveness of our solutions and makes the case for their adoption when designing robust and reliable deep anomaly detectors.
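The first of the three named defenses, model voting ensembling, can be sketched in a few lines. The three base "detectors" below are hypothetical threshold rules over a (duration, packet-rate) feature pair, not the paper's trained NIDS models; the point is only that an adversarial example must now fool a majority of models rather than a single one.

```python
# Minimal sketch of a model-voting ensemble defense. The detectors are
# made-up stand-ins for trained NIDS classifiers over (duration, pkt_rate).
from collections import Counter

def vote(classifiers, sample):
    # Each classifier maps a feature vector to a label; the ensemble
    # returns the majority label.
    labels = [clf(sample) for clf in classifiers]
    return Counter(labels).most_common(1)[0][0]

# Hypothetical threshold detectors (illustrative assumptions).
detectors = [
    lambda x: "malicious" if x[1] > 100 else "benign",                # rate rule
    lambda x: "malicious" if x[0] < 0.5 and x[1] > 80 else "benign",  # burst rule
    lambda x: "malicious" if x[0] * x[1] > 25 else "benign",          # volume rule
]

print(vote(detectors, (0.3, 150)))  # all three flag it -> "malicious"
print(vote(detectors, (0.3, 99)))   # evades the rate rule, but the
                                    # majority still says "malicious"
```

A perturbation that drops the packet rate just below one detector's threshold no longer flips the ensemble's decision, which is the intuition behind the defense.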
Chaoyun Zhang, X. Costa, P. Patras. "Tiki-Taka: Attacking and Defending Deep Learning-based Intrusion Detection Systems." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421359
Naufal Suryanto, Hyoeun Kang, Yongsu Kim, Youngyeo Yun, Harashta Tatimma Larasati, Howon Kim
Current adversarial attack methods in black-box settings mainly (1) rely on a transferability approach, which requires a substitute model and is hence inefficient, or (2) employ a large number of queries to craft their adversarial examples, and are hence very likely to be detected and responded to by the target system (e.g., an AI service provider) due to the high traffic volume. In this paper, we present a black-box adversarial attack based on Multi-Group Particle Swarm Optimization with Random Redistribution (MGRR-PSO), which yields a very high success rate while maintaining a low number of queries by launching the attack in a distributed manner. Attacks are executed from multiple nodes, disseminating queries among the nodes, which reduces the possibility of being recognized by the target system while also increasing scalability. Furthermore, we propose to efficiently remove excessive perturbation (i.e., perturbation pruning) by again utilizing the MGRR-PSO. Overall, we perform five different experiments: comparing our attack's performance with existing algorithms, testing in high-dimensional space using the ImageNet dataset, examining our hyperparameters, and testing a real digital attack on Google Cloud Vision. Our attack obtains a 100% success rate for both untargeted and targeted attacks on the MNIST and CIFAR-10 datasets and successfully fools Google Cloud Vision as a proof of a real digital attack with relatively few queries.
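The multi-group with random redistribution idea can be sketched on a toy objective. This is an illustrative variant, not the paper's exact algorithm: the hyperparameters, the use of per-group bests, and the rule of scattering each group's worst particle are assumptions made for the sketch.

```python
# Minimal sketch of multi-group PSO with random redistribution on a toy
# objective (the attack's loss over pixels would play this role).
import random

def mgrr_pso(objective, dim, n_groups=4, group_size=10, iters=200,
             bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5, redist_every=25):
    lo, hi = bounds
    rnd = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    groups = [[{"x": rnd(), "v": [0.0] * dim, "bx": None, "bf": float("inf")}
               for _ in range(group_size)] for _ in range(n_groups)]
    gbest_x, gbest_f = None, float("inf")
    for t in range(iters):
        for g in groups:
            # Each group tracks its own best, keeping the swarm diverse.
            lbest_x, lbest_f = None, float("inf")
            for p in g:
                f = objective(p["x"])
                if f < p["bf"]: p["bf"], p["bx"] = f, p["x"][:]
                if f < lbest_f: lbest_f, lbest_x = f, p["x"][:]
                if f < gbest_f: gbest_f, gbest_x = f, p["x"][:]
            for p in g:
                for i in range(dim):
                    p["v"][i] = (w * p["v"][i]
                                 + c1 * random.random() * (p["bx"][i] - p["x"][i])
                                 + c2 * random.random() * (lbest_x[i] - p["x"][i]))
                    p["x"][i] = min(hi, max(lo, p["x"][i] + p["v"][i]))
        if (t + 1) % redist_every == 0:
            # Random redistribution: scatter each group's worst particle
            # to escape local optima.
            for g in groups:
                worst = max(g, key=lambda p: p["bf"])
                worst["x"], worst["v"] = rnd(), [0.0] * dim
    return gbest_x, gbest_f

best_x, best_f = mgrr_pso(lambda x: sum(xi * xi for xi in x), dim=3)
print(round(best_f, 4))  # converges near 0 on the sphere function
```

In the distributed attack, each group's queries could be issued from a different node, which is what spreads the query load across attackers.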
Naufal Suryanto, Hyoeun Kang, Yongsu Kim, Youngyeo Yun, Harashta Tatimma Larasati, Howon Kim. "ABSTRACT: Together We Can Fool Them: A Distributed Black-Box Adversarial Attack Based on Multi-Group Particle Swarm Optimization." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421368
In recent years, deep neural networks (DNNs) have become an important type of intellectual property due to their high performance on various classification tasks. As a result, DNN stealing attacks have emerged. Many attack surfaces have been exploited, among which cache timing side-channel attacks are hugely problematic because they do not need physical probing or direct interaction with the victim to estimate the DNN model. However, existing cache-side-channel-based DNN reverse engineering attacks rely on analyzing the binary code of the DNN library, which must be shared between the attacker and the victim in main memory. In reality, the DNN library code is often inaccessible because 1) the code is proprietary, or 2) memory sharing has been disabled by the operating system. In our work, we propose GANRED, an attack approach based on the generative adversarial nets (GAN) framework, which utilizes cache timing side-channel information to accurately recover the structure of DNNs without memory sharing or code access. The benefit of GANRED is four-fold. 1) There is no need for DNN library code analysis. 2) No shared main memory segment between the victim and the attacker is needed. 3) Our attack locates the exact structure of the victim model, unlike existing attacks, which only narrow down the structure search space. 4) Our attack efficiently scales to deeper DNNs, exhibiting only linear growth in the number of layers in the victim DNN.
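The layer-by-layer recovery idea, and why it scales linearly in depth, can be shown with a toy sketch. This is a simplified deterministic analogue, not GANRED itself: the GAN machinery is omitted, and the trace model (accesses proportional to in-width times out-width) is a made-up stand-in for real cache side-channel measurements.

```python
# Toy sketch of layer-by-layer structure recovery from a cache-like trace
# (illustrative only; the trace model is a hypothetical assumption).
def simulate_trace(structure, input_width=32):
    # One "trace entry" per layer: accesses ~ in_width * out_width.
    widths = [input_width] + list(structure)
    return [widths[i] * widths[i + 1] for i in range(len(structure))]

def recover_structure(victim_trace, candidate_widths=(16, 32, 64, 128),
                      input_width=32):
    structure = []
    for depth, observed in enumerate(victim_trace):
        # Try each candidate width for this layer; keep the one whose
        # simulated trace matches the observation (work is linear in depth).
        for w in candidate_widths:
            trial = structure + [w]
            if simulate_trace(trial, input_width)[depth] == observed:
                structure = trial
                break
        else:
            raise ValueError(f"no candidate matches layer {depth}")
    return structure

victim = [64, 128, 32, 16]
trace = simulate_trace(victim)   # what the attacker observes via the cache
print(recover_structure(trace))  # -> [64, 128, 32, 16]
```

In GANRED proper, a generator proposes candidate structures and the match against the victim's observed trace plays the discriminator's role; the sketch keeps only the prefix-matching search that yields the linear scaling.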
Yuntao Liu, Ankur Srivastava. "GANRED: GAN-based Reverse Engineering of DNNs via Cache Side-Channel." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421356
S. Shringarputale, P. Mcdaniel, Kevin R. B. Butler, T. L. Porta
Public clouds are inherently multi-tenant: applications deployed by different parties (including malicious ones) may reside on the same physical machines and share various hardware resources. With the introduction of newer hypervisors, containerization frameworks like Docker, and managed/orchestrated clusters using systems like Kubernetes, cloud providers downplay the feasibility of co-tenant attacks by marketing a belief that applications do not operate on shared hardware. In this paper, we challenge the conventional wisdom that attackers cannot confirm co-residency with a victim application from inside state-of-the-art containers running on virtual machines. We analyze the degree of vulnerability present in containers running on various systems, including a broad range of commercially used orchestrators. Our results show that on commercial cloud environments, including AWS and Azure, we can obtain over 90% success rates for co-residency detection using real-life workloads. Our investigation confirms that co-residency attacks are a significant concern for containers running on modern orchestration systems.
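The core of many co-residency probes is a timing comparison: a shared hardware resource gets measurably slower when a suspected neighbor applies load. The sketch below is a generic illustration of that decision step, not the paper's detector; the latency samples are synthetic, and the median-ratio threshold is an assumption.

```python
# Illustrative co-residency decision step over probe latencies (synthetic
# data; not the paper's method). A real probe would time accesses to a
# contended resource, e.g. the cache or memory bus.
import random
import statistics

def co_resident(off_samples, on_samples, factor=1.5):
    # Flag co-residency when contention inflates the median probe latency.
    return statistics.median(on_samples) > factor * statistics.median(off_samples)

random.seed(0)
baseline  = [random.gauss(100, 5) for _ in range(200)]   # ns, idle neighbor
contended = [random.gauss(180, 20) for _ in range(200)]  # ns, noisy neighbor
print(co_resident(baseline, contended))  # -> True
print(co_resident(baseline, baseline))   # -> False
```

Medians rather than means keep the decision robust to the occasional latency spike that cloud schedulers introduce.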
S. Shringarputale, P. Mcdaniel, Kevin R. B. Butler, T. L. Porta. "Co-residency Attacks on Containers are Real." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421357
String search finds occurrences of patterns in a larger text. This general problem occurs in various application scenarios, e.g., Internet search, text processing, and DNA analysis. Using somewhat homomorphic encryption with SIMD packing, we provide an efficient string search protocol that allows performing a private search over outsourced data with minimal preprocessing. At the base of the string search protocol lies a randomized homomorphic equality circuit whose depth is independent of the pattern length. This circuit not only improves performance but also increases the practicality of our protocol, as it requires the same set of encryption parameters for a wide range of pattern lengths. This constant-depth algorithm is about 12 times faster than prior work. It takes about 5 minutes on an average laptop to find the positions of a string of at most 50 UTF-32 characters in a text of 1000 characters. In addition, we provide a method that compresses the search results, thus reducing the communication cost of the protocol. For example, the communication complexity for searching a string of 50 characters is about 347 KB for a text of length 10,000 and 13.9 MB for a text of 1,000,000 characters.
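The randomized equality idea has a simple plaintext analogue: over a prime field, the sum of random multiples of character differences is zero exactly when the strings match, except with probability about 1/p, and computing it needs only one multiplication per character, so the multiplicative depth does not grow with the pattern length. The sketch below runs on plaintext; homomorphic evaluation, SIMD packing, and result compression are all omitted.

```python
# Plaintext analogue of a randomized equality test with depth independent
# of pattern length (illustrative; no encryption involved).
import random

P = 2**31 - 1  # toy prime modulus

def randomized_equal(a: str, b: str) -> bool:
    assert len(a) == len(b)
    acc = 0
    for ca, cb in zip(a, b):
        r = random.randrange(1, P)
        acc = (acc + r * ((ord(ca) - ord(cb)) % P)) % P
    # acc == 0 iff a == b, up to a ~1/P false-positive probability.
    return acc == 0

def search(text: str, pattern: str):
    m = len(pattern)
    return [i for i in range(len(text) - m + 1)
            if randomized_equal(text[i:i + m], pattern)]

print(search("the cat sat on the mat", "at"))  # -> [5, 9, 20]
```

Because the test is a single random linear combination, all window comparisons can share one set of encryption parameters regardless of the pattern length, which mirrors the practicality argument above.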
Charlotte Bonte, Ilia Iliashenko. "Homomorphic String Search with Constant Multiplicative Depth." In Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop. DOI: 10.1145/3411495.3421361
G. D. Crescenzo, B. Coan, L. Bahler, K. Rohloff, Y. Polyakov, D. Cousins
Machine Learning as a Service (MLaaS) and Smart Grid as a Service (SGaaS) are expected to grow at a significant rate. Like most cloud services, MLaaS and SGaaS can be subject to a number of attacks. In this paper, we focus on white-box attacks (informally, attacks that try to access some or all internal data or computation used by the service program) and black-box attacks (informally, attacks that only use input-output access to the attacked service program). We consider a participant model including a setup manager, a cloud server, and one or many data producers. The cloud server runs a machine learning classifier trained on a dataset provided by the setup manager and classifies new input data provided by the data producers. Applications include analytics over data received by distributed sensors, for instance in a typical SGaaS environment. We propose a new security notion of encrypted-input classifier obfuscation as a set of algorithms that, in the above participant and algorithm model, aims to protect the cloud server's classifier program from both white-box and black-box attacks. This notion builds on cryptographic obfuscation of programs [1], cryptographic obfuscation of classifiers [2], and encrypted-input obfuscation of programs [3]. We model classifiers as a pair of programs: a training program that, on input a dataset and secret data values, returns classification parameters, and a classification program that, on input classification parameters and a new input data value, returns a classification result. A typical execution goes as follows. During obfuscation generation, the setup manager randomly chooses a key k, sends a k-based obfuscation of the classifier to the cloud server, and sends to the data producers either k or information to generate k-based input data encryptions. During obfuscation evaluation, the data producers send k-based input data encryptions to the cloud server, which evaluates the obfuscated classifier over the encrypted input data. Here, the goal is to protect the confidentiality of the dataset, the secret data, and the classification parameters. One can obtain a general-purpose encrypted-input classifier obfuscator in two steps: 1) transforming a suitable composition of training and classification algorithms into a single boolean circuit; 2) applying to this circuit the result from [3] stating that a modification of Yao's protocol [4] is an encrypted-input obfuscation of gate values in any polynomial-size boolean circuit. This result is of only theoretical relevance. Toward finding a practically efficient obfuscation of specific classifiers, we note that techniques from [3] can be used to produce an obfuscator for decision trees. Moreover, in recent results we have produced an obfuscator for image matching (i.e., matching an input image to a secret image).
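The image-matching flavor mentioned above can be illustrated with a classic hash-based point-function obfuscation sketch: the setup manager publishes only a salted hash of the secret image, so the cloud server can test whether a candidate matches without learning the secret. This is a simplified assumption-laden toy, not the paper's construction, and unlike the full notion it leaves the producers' inputs unencrypted.

```python
# Toy point-function obfuscation for image matching (illustrative): the
# cloud holds only a salted hash of the secret image and learns nothing
# from it except match / no-match.
import hashlib
import secrets

def obfuscate(secret_image: bytes) -> dict:
    # Setup manager: publish salt + digest instead of the secret itself.
    salt = secrets.token_bytes(16)
    return {"salt": salt,
            "digest": hashlib.sha256(salt + secret_image).digest()}

def classify(obf: dict, candidate: bytes) -> bool:
    # Cloud-side evaluation: reveals only whether the candidate matches.
    return hashlib.sha256(obf["salt"] + candidate).digest() == obf["digest"]

secret = b"\x01\x02\x03\x04"               # the setup manager's secret image
obf = obfuscate(secret)                    # sent to the cloud server
print(classify(obf, b"\x01\x02\x03\x04"))  # -> True
print(classify(obf, b"\xff\x00\xff\x00"))  # -> False
```

The salted digest plays the role of the k-based obfuscation sent to the cloud during obfuscation generation; a white-box attacker who reads it still learns nothing about the secret image beyond equality-query answers.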
{"title":"Securing Classifiers Against Both White-Box and Black-Box Attacks using Encrypted-Input Obfuscation","authors":"G. D. Crescenzo, B. Coan, L. Bahler, K. Rohloff, Y. Polyakov, D. Cousins","doi":"10.1145/3411495.3421369","journal":"Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop","publicationDate":"2020-11-09"}
Process confinement is a key requirement for workloads in the cloud and in other contexts. Existing process confinement mechanisms on Linux, however, are complex and inflexible because they are implemented using a combination of primitive abstractions (e.g., namespaces, cgroups) and complex security mechanisms (e.g., SELinux, AppArmor) that were designed for purposes beyond basic process confinement. We argue that simple, efficient, and flexible confinement can be better implemented today using eBPF, an emerging technology for safely extending the Linux kernel. We present a proof-of-concept confinement application, bpfbox, that uses fewer than 2000 lines of kernelspace code and allows for confinement at the userspace function, system call, LSM hook, and kernelspace function boundaries, something that no existing process confinement mechanism can do. Further, it does so using a policy language simple enough for ad-hoc confinement purposes. This paper presents the motivation, design, implementation, and benchmarks of bpfbox, including a sample web server confinement policy.
{"title":"bpfbox: Simple Precise Process Confinement with eBPF","authors":"W. Findlay, Anil Somayaji, David Barrera","doi":"10.1145/3411495.3421358","journal":"Proceedings of the 2020 ACM SIGSAC Conference on Cloud Computing Security Workshop","publicationDate":"2020-11-09"}