Inadvertent exposure of sensitive data is a major concern for potential cloud customers. Much attention has been devoted to other data leakage vectors, such as side-channel attacks, while issues of data disposal and assured deletion have not received enough attention to date. However, data that is not properly destroyed may lead to unintended disclosures, in turn resulting in heavy financial penalties and reputational damage. In non-cloud contexts, issues of incomplete deletion are well understood. To the best of our knowledge, there has been no systematic analysis of assured deletion challenges in public clouds to date. In this paper, we aim to address this gap by analysing assured deletion requirements for the cloud, identifying cloud features that pose a threat to assured deletion, and describing various assured deletion challenges. Based on this discussion, we identify future challenges for research in this area and propose an initial assured deletion architecture for cloud settings. Altogether, our work offers a systematization of the requirements and challenges of assured deletion in the cloud, and a well-founded reference point for future research in developing new solutions for assured deletion.
"Assured Deletion in the Cloud: Requirements, Challenges and Future Directions", K. Ramokapane, A. Rashid, J. Such. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996434.
In public clouds, an adversary can co-locate his or her virtual machines (VMs) with victims' VMs on the same physical servers to launch attacks against integrity, confidentiality or availability. The most important factor in decreasing the likelihood of such co-location attacks is the VM placement strategy. However, a co-location-resistant strategy compromises the cloud provider's resource optimization. This tradeoff between security and resource optimization constitutes one of the most crucial challenges in cloud security. In this work, we propose a placement strategy that decreases the co-location rate by trading off VM startup time rather than resource optimization. We give a mathematical analysis to quantify the co-location resistance. The proposed strategy is evaluated against attacks that abuse placement locality, where the attack and target VMs are launched simultaneously or within a short time window. Compared to the EC2 placement strategy, the most co-location-resistant strategy among existing public cloud providers' strategies, our strategy greatly reduces co-location attacks at the cost of a slight VM startup delay (relative to the actual VM startup delays of public cloud providers).
"Co-location Resistant Strategy with Full Resources Optimization", Mouhebeddine Berrima, A. K. Nasr, N. B. Rajeb. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996435.
Mathias Payer, S. Mangard, E. Weippl, S. Katzenbeisser, Elli Androulaki, M. Reiter
It is our great pleasure to welcome you to the 8th ACM Cloud Computing Security Workshop (CCSW). Since its inception, CCSW has been a forum for bringing together researchers and practitioners to discuss technological advances bearing on the security of compute clouds, their tenants, and the larger Internet community. This year's workshop continues in this tradition. Submissions were evaluated by a program committee of 28 experts in the field, resulting in the selection of 8 full papers (from 23 submitted) and 2 short papers (from 4 submitted) after a roughly one-month review process and online discussion. In addition, the workshop hosted invited lectures by Dr. Michael Waidner from the Fraunhofer SIT and Technische Universität Darmstadt, and Mr. Luciano Franceschina from Teralytics.
Proceedings of the 2016 ACM on Cloud Computing Security Workshop, Mathias Payer, S. Mangard, E. Weippl, S. Katzenbeisser, Elli Androulaki, M. Reiter. October 28, 2016. DOI: 10.1145/2996429.
Meryeme Ayache, M. Erradi, Bernd Freisleben, A. Khoumsi
Cloud computing offers most of its services in multi-tenancy environments. To satisfy security requirements among collaborating tenants, each tenant may define a set of access control policies to secure access to shared data. Several cloud solutions make use of XACML to specify such policies. However, existing implementations of XACML perform a brute-force search, comparing a request to all existing rules in a given XACML policy. This degrades the performance of the decision process (i.e., policy evaluation), especially for policies with a large number of rules. In this paper, we propose an automata-based approach for efficient XACML policy evaluation. We implemented our approach in a cloud policy engine called X2Automata. The engine first converts both XACML policies and access requests to automata. Second, it combines the two automata via a synchronous product. Third, it applies an evaluation procedure to the resulting automaton to decide whether an access request is granted or not. To highlight the efficiency of X2Automata, we compare its performance, in the OpenStack cloud environment, with that of the XACML implementation named Balana.
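The intuition behind replacing the brute-force rule scan can be sketched roughly as follows. This toy compiles rules into a trie-shaped automaton over (subject, resource, action) triples and walks a request through it symbol by symbol; the rule set, attribute values and decision strings are invented for illustration, and this is a loose stand-in for, not a reproduction of, the actual X2Automata construction:

```python
# Hedged sketch: rules compiled into a trie-shaped automaton so that a
# request is evaluated by one walk instead of a scan over every rule.
# All names and values below are invented for illustration.

def build_automaton(rules):
    """Compile rules into nested dicts keyed on (subject, resource, action).

    Each root-to-leaf path is one rule; the leaf stores the rule's effect,
    playing the role of an accepting state."""
    root = {}
    for subject, resource, action, effect in rules:
        node = root
        for symbol in (subject, resource, action):
            node = node.setdefault(symbol, {})
        node["effect"] = effect
    return root

def evaluate(automaton, request):
    """Walk the request's symbols through the automaton (the 'product')."""
    node = automaton
    for symbol in request:
        if symbol not in node:
            return "Deny"  # no transition: rule not applicable
        node = node[symbol]
    return node.get("effect", "Deny")

rules = [
    ("tenantA", "/data/shared", "read", "Permit"),
    ("tenantA", "/data/shared", "write", "Deny"),
    ("tenantB", "/data/shared", "read", "Permit"),
]
automaton = build_automaton(rules)
print(evaluate(automaton, ("tenantA", "/data/shared", "read")))   # Permit
print(evaluate(automaton, ("tenantB", "/data/shared", "write")))  # Deny
```

Evaluation cost here grows with the depth of the trie rather than the number of rules, which is the performance effect the paper targets.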
"Towards an Efficient Policy Evaluation Process in Multi-Tenancy Cloud Environments", Meryeme Ayache, M. Erradi, Bernd Freisleben, A. Khoumsi. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996431.
Dimitrios Vasilopoulos, Melek Önen, Kaoutar Elkhiyaoui, R. Molva
This paper addresses the problem of data retrievability in cloud computing systems that perform deduplication to optimize their space savings: While there exist a number of proof of retrievability (PoR) solutions that guarantee storage correctness with cryptographic means, these solutions are unfortunately at odds with deduplication technology. To reconcile proofs of retrievability with file-based cross-user deduplication, we propose the message-locked PoR approach, whereby the PoR effect on duplicate data is identical and depends only on the value of the data segment. As a proof of concept, we describe two instantiations of existing PoRs and show that the main extension is performed during the setup phase, whereby both the keying material and the encoded version of the to-be-outsourced file are computed based on the file itself. We additionally propose a new server-aided message-locked key generation technique that, compared with related work, offers better security guarantees.
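The message-locked idea of deriving keying material from the file itself can be illustrated in the style of convergent encryption, where identical files yield identical keys and hence identical ciphertexts, which is what makes deduplication possible. This is a minimal sketch with a toy XOR cipher; it is not the paper's server-aided scheme, which additionally involves a key server to resist offline brute-force attacks:

```python
# Hedged sketch of message-locked key derivation (convergent-encryption
# style). The XOR "cipher" is a toy; a real scheme would use a
# deterministic authenticated encryption mode.
import hashlib

def message_locked_key(data: bytes) -> bytes:
    # The key depends only on the file content.
    return hashlib.sha256(data).digest()

def encrypt(data: bytes, key: bytes) -> bytes:
    keystream = hashlib.sha256(key).digest()
    return bytes(b ^ keystream[i % len(keystream)]
                 for i, b in enumerate(data))

f1 = b"identical file contents"
f2 = b"identical file contents"
c1 = encrypt(f1, message_locked_key(f1))
c2 = encrypt(f2, message_locked_key(f2))
assert c1 == c2  # equal plaintexts -> equal ciphertexts -> deduplicable
```

In a message-locked PoR, the same principle extends to the setup phase: both the keys and the encoded (PoR-ready) version of the file are derived from the file, so two users outsourcing the same file produce the same outsourced object.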
"Message-Locked Proofs of Retrievability with Secure Deduplication", Dimitrios Vasilopoulos, Melek Önen, Kaoutar Elkhiyaoui, R. Molva. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996433.
Oblivious RAM (ORAM) is a well-established technology to hide data access patterns from an untrusted storage system. Although research in ORAM has surged in the last few years with the advent of cloud computing, it is still unclear whether ORAM is ready for the cloud. As we demonstrate in this short paper, there are still some important hurdles to overcome. One of them is the standard block-based ORAM interface, which can become a timing side channel when used as a substrate to implement the higher-level abstractions typically found in the cloud, such as filesystems and personal storage services. We analyze this form of leakage and discuss some possible solutions to this problem, concluding that thwarting it efficiently calls for further research.
"Oblivious RAM as a Substrate for Cloud Storage -- The Leakage Challenge Ahead", M. Sánchez-Artigas. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996430.
Cellular networks are aware of the approximate geographical location of all connected devices 24/7 in order to route calls and network packets. Teralytics is a data analytics company specialized in analyzing this particular dataset: Mobile network data describing the mobility behavior of millions of people. The data's unique nature poses several challenges: First and foremost, due to the sensitivity of the data we must adhere to strict privacy rules and regulations and invest heavily into finding answers to legal and ethical questions about its use. Additionally, the data is generated by a complex system of which we have only incomplete visibility and thus shows anomalies and imprecisions, which must be corrected to produce valid analytical output. Finally, as a business we face the challenge of identifying the intersection between feasibility and commercial value of analytical applications. The talk will explore and showcase the challenges and solutions of a real-world data analytics use case.
"Data Analytics: Understanding Human Behavior based on Mobile Network Data", Luciano Franceschina. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996441.
H. Ritzdorf, Ghassan O. Karame, Claudio Soriente, Srdjan Capkun
Most existing cloud storage providers rely on data deduplication to significantly reduce storage costs by storing duplicate data only once. While the literature has thoroughly analyzed the client-side information leakage associated with the use of data deduplication techniques in the cloud, no previous work has analyzed the information leakage associated with access-trace information (e.g., object size and timing) that is available whenever a client uploads a file to a curious cloud provider. In this paper, we address this problem and analyze the information leakage associated with data deduplication on a curious storage server. We show that even if the data is encrypted with a key not known to the storage server, the latter can still acquire considerable information about the stored files and even determine which files are stored. We validate our results both analytically and experimentally using a number of real storage datasets.
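One way such server-side leakage can arise, sketched here under invented assumptions (a toy content-defined chunker and a made-up file corpus, not the paper's attack): even when every chunk is encrypted, the sequence of chunk lengths visible to the server can fingerprint a publicly known file:

```python
# Hedged sketch: a curious server matching observed chunk-size sequences
# against fingerprints of known files. The chunker and corpus are toys.
import hashlib

def chunk_sizes(data: bytes, mask: int = 0x3F) -> tuple:
    """Toy content-defined chunker: cut where a rolling hash byte is zero."""
    sizes, start = [], 0
    for i in range(len(data)):
        h = hashlib.sha1(data[max(start, i - 7):i + 1]).digest()[0]
        if h & mask == 0 or i == len(data) - 1:
            sizes.append(i + 1 - start)
            start = i + 1
    return tuple(sizes)

# Server-side fingerprints of publicly known files (invented corpus).
known = {chunk_sizes(doc): name for name, doc in
         [("report.pdf", b"A" * 300 + b"B" * 200),
          ("leak.txt", b"secret " * 100)]}

# Encryption hides the content but, in this model, not chunk boundaries:
observed = chunk_sizes(b"secret " * 100)
print(known.get(observed, "unknown"))  # identifies "leak.txt"
```

The point of the sketch is only that metadata visible to the server (sizes, and analogously timing) carries identifying information even though no key is ever revealed.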
"On Information Leakage in Deduplicated Storage Systems", H. Ritzdorf, Ghassan O. Karame, Claudio Soriente, Srdjan Capkun. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996432.
Denial of Service (DoS) attacks pose a critical threat to the stability and availability of the Internet. In Distributed DoS (DDoS) attacks, multiple attacking agents cooperate in an attempt to cause excessive load in order to disconnect a victim. The frequency and volume of DoS attacks continue to break records, reaching 400 Gb/s. Although many defenses have been proposed, very few are adopted, due to low effectiveness, high costs and the changes required to integrate them into the existing infrastructure. To improve resilience against DDoS attacks, service providers move their operations to cloud platforms. Unfortunately, even if the cloud applies filtering, rate limiting and deep packet inspection, the attacker can subvert those defenses by distributing the attack among multiple attacking IP addresses and aiming the flood at the victim. In this talk we focus on DDoS attacks which disrupt the availability of a service by depleting the bandwidth or the resources of an operating system or application on the server side. Such attackers typically employ a botnet to generate large traffic volumes. A botnet consists of bots (compromised computers) located in different parts of the Internet. The bots, depending on their privileges on the compromised hosts, send multiple packets either from spoofed or from their real IP addresses. We utilize the cloud platform to implement Stratum Filtering, a novel mechanism aimed at protecting the availability and resilience of web servers hosted on clouds. Our mechanism is easy to integrate into the cloud platform and requires changes neither to the existing infrastructure nor to the protected servers. Stratum Filtering takes advantage of the large IP address blocks allocated to clouds, distributed availability zones, and the support for service migration within cloud platforms.
These advantages offered by clouds enable us to restrict the attacker to a naive strategy, where the best possible attack is to simply flood the entire IP address block allocated to the cloud. However, such an attack requires a huge volume of traffic, exposing the malicious sources. In addition, controlling and coordinating a number of bots large enough to disconnect a cloud is not trivial to accomplish. Stratum Filtering consists of three layers, such that each successive layer applies filtering targeted at blocking a different type of attack traffic at the network, transport or application layer. The filtering uses differences in the behavior of legitimate clients versus bots to identify and filter traffic arriving from non-standard clients. To characterize …
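The three-layer structure can be pictured as a filter chain in which each stage discards a different class of attack traffic. The predicates below are invented toys (the talk does not specify them); only the layered shape reflects the description above:

```python
# Hedged sketch of a layered filter chain in the spirit of Stratum
# Filtering: network-, transport- and application-layer checks applied
# in succession. All predicates and packet fields are invented.
BLOCKED_IPS = {"203.0.113.7"}

def network_filter(pkt):
    # Network layer: drop traffic from known-bad sources.
    return pkt["src_ip"] not in BLOCKED_IPS

def transport_filter(pkt):
    # Transport layer: spoofed-source floods cannot complete a handshake.
    return pkt.get("tcp_handshake_complete", False)

def application_filter(pkt):
    # Application layer: toy check for browser-like client behavior.
    return pkt.get("user_agent", "").startswith("Mozilla")

def stratum(pkt):
    for layer in (network_filter, transport_filter, application_filter):
        if not layer(pkt):
            return False
    return True

legit = {"src_ip": "198.51.100.2", "tcp_handshake_complete": True,
         "user_agent": "Mozilla/5.0"}
spoofed = {"src_ip": "198.51.100.9"}  # never completes a handshake
print(stratum(legit), stratum(spoofed))  # True False
```

Each successive layer only sees traffic that survived the previous one, which is what lets the later, more expensive checks run on a much smaller volume.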
"Stratum Filtering: Cloud-based Detection of Attack Sources", A. Herzberg, Haya Schulmann, M. Waidner. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996440.
Privacy-preserving range queries allow encrypting data while still enabling queries on ciphertexts if their corresponding plaintexts fall within a requested range. This gives a data owner the possibility to outsource data collections to a cloud service provider without sacrificing privacy or losing the ability to filter the data. However, existing methods for range queries either leak additional information (such as the ordering of the complete data set) or slow down the search process tremendously by requiring a query against each ciphertext in the data collection. We present a novel scheme that leaks only the access pattern while supporting amortized poly-logarithmic search time. Our construction is based on the novel idea of enabling the cloud service provider to compare requested range queries. By doing so, the cloud service provider can use the access pattern to speed up search time for future range queries. On the one hand, values that have fallen within a queried range are stored in an interactively built index for future requests. On the other hand, values that have never been queried do not leak any information to the cloud service provider and remain perfectly secure. To show its practicality, we have implemented our scheme and give a detailed runtime evaluation.
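The "interactively built index" idea can be sketched as follows. Values returned by past range queries are promoted into a sorted index, so repeated or overlapping queries are answered by binary search, while never-queried items stay in an unsorted pool. All names and data are invented, and a real construction would of course operate over ciphertexts via searchable encryption rather than plaintext integers:

```python
# Hedged sketch: query results migrate from an opaque pool into a sorted
# index, giving fast lookups for ranges that overlap past queries.
import bisect

class RangeStore:
    def __init__(self, values):
        self.pool = list(values)   # stands in for opaque ciphertexts
        self.index = []            # sorted values matched by past queries

    def range_query(self, lo, hi):
        # Fast path: binary search over the learned index.
        left = bisect.bisect_left(self.index, lo)
        right = bisect.bisect_right(self.index, hi)
        hits = self.index[left:right]
        # Slow path: linear scan of still-unindexed values.
        new_hits = [v for v in self.pool if lo <= v <= hi]
        for v in new_hits:
            self.pool.remove(v)
            bisect.insort(self.index, v)  # remember for future queries
        return sorted(hits + new_hits)

store = RangeStore([50, 10, 30, 70, 20])
print(store.range_query(15, 55))  # first query scans: [20, 30, 50]
print(store.range_query(15, 55))  # repeat served from the index
```

This also mirrors the leakage profile described above: only values that have actually fallen within some queried range ever move into the (ordered, hence more revealing) index.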
"Poly-Logarithmic Range Queries on Encrypted Data with Small Leakage", Florian Hahn, F. Kerschbaum. In Proceedings of the 2016 ACM on Cloud Computing Security Workshop, October 28, 2016. DOI: 10.1145/2996429.2996437.