Conference on Computer and Communications Security: proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security: latest publications
(sp)iPhone: decoding vibrations from nearby keyboards using mobile phone accelerometers
Philip Marquardt, A. Verma, Henry Carter, Patrick Traynor
Mobile phones are increasingly equipped with a range of highly responsive sensors. From cameras and GPS receivers to three-axis accelerometers, applications running on these devices are able to experience rich interactions with their environment. Unfortunately, some applications may be able to use such sensors to monitor their surroundings in unintended ways. In this paper, we demonstrate that an application with access to accelerometer readings on a modern mobile phone can use such information to recover text entered on a nearby keyboard. Note that unlike previous emanation recovery papers, the accelerometers on such devices sample at near the Nyquist rate, making previous techniques unworkable. Our application instead detects and decodes keystrokes by measuring the relative physical position and distance between each vibration. We then match abstracted words against candidate dictionaries and record word recovery rates as high as 80%. In so doing, we demonstrate the potential to recover significant information from the vicinity of a mobile device without gaining access to resources generally considered to be the most likely sources of leakage (e.g., microphone, camera).
{"title":"(sp)iPhone: decoding vibrations from nearby keyboards using mobile phone accelerometers","authors":"Philip Marquardt, A. Verma, Henry Carter, Patrick Traynor","doi":"10.1145/2046707.2046771","DOIUrl":"https://doi.org/10.1145/2046707.2046771","url":null,"abstract":"Mobile phones are increasingly equipped with a range of highly responsive sensors. From cameras and GPS receivers to three-axis accelerometers, applications running on these devices are able to experience rich interactions with their environment. Unfortunately, some applications may be able to use such sensors to monitor their surroundings in unintended ways. In this paper, we demonstrate that an application with access to accelerometer readings on a modern mobile phone can use such information to recover text entered on a nearby keyboard. Note that unlike previous emanation recovery papers, the accelerometers on such devices sample at near the Nyquist rate, making previous techniques unworkable. Our application instead detects and decodes keystrokes by measuring the relative physical position and distance between each vibration. We then match abstracted words against candidate dictionaries and record word recovery rates as high as 80%. In so doing, we demonstrate the potential to recover significant information from the vicinity of a mobile device without gaining access to resources generally considered to be the most likely sources of leakage (e.g., microphone, camera).","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"130 1","pages":"551-562"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77975005","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Towards detecting DMA malware
Patrick Stewin, Jean-Pierre Seifert, Collin Mulliner
Malware residing in dedicated, isolated hardware containing an auxiliary processor, such as is present in network, video, and CPU chipsets, is an emerging security threat. To attack the host system, this kind of malware uses direct memory access (DMA) functionality. By utilizing DMA, the malware can fully compromise the host system, bypassing any kernel-level protection. Traditional anti-virus software is incapable of detecting this kind of malware, since the auxiliary systems are completely isolated from the host CPU. In this work we present a novel method capable of detecting such malware. To understand its properties, we evaluated a prototype that attacks the host via DMA; the prototype executes in the chipset of an x86 platform. We identified key properties of such malware that are crucial for our detection method. Our detection mechanism is based on monitoring the side effects of rogue DMA usage performed by the malware. We believe that our detection mechanism is general and a first step toward detecting malware in dedicated, isolated hardware.
{"title":"Poster: Towards detecting DMA malware","authors":"Patrick Stewin, Jean-Pierre Seifert, Collin Mulliner","doi":"10.1145/2046707.2093511","DOIUrl":"https://doi.org/10.1145/2046707.2093511","url":null,"abstract":"Malware residing in dedicated isolated hardware containing an auxiliary processor such as present in network, video, and CPU chipsets is an emerging security threat. To attack the host system, this kind of malware uses the direct memory access (DMA) functionality. By utilizing DMA, the host system can be fully compromised bypassing any kind of kernel level protection. Traditional anti-virus software is not capable to detect this kind of malware since the auxiliary systems are completely isolated from the host CPU. In this work we present our novel method that is capable of detecting this kind of malware. To understand the properties of such malware we evaluated a prototype that attacks the host via DMA. Our prototype is executed in the chipset of an x86 architecture. We identified key properties of such malware that are crucial for our detection method. Our detection mechanism is based on monitoring the side effects of rogue DMA usage performed by the malware. We believe that our detection mechanism is general and the first step in the detection of malware in dedicated isolated hardware.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"18 1","pages":"857-860"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79932890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated black-box detection of side-channel vulnerabilities in web applications
Peter Chapman, David Evans
Web applications divide their state between the client and the server. The frequent and highly dynamic client-server communication that is characteristic of modern web applications leaves them vulnerable to side-channel leaks, even over encrypted connections. We describe a black-box tool for detecting and quantifying the severity of side-channel vulnerabilities by analyzing network traffic over repeated crawls of a web application. By viewing the adversary as a multi-dimensional classifier, we develop a methodology to more thoroughly measure the distinguishability of network traffic for a variety of classification metrics. We evaluate our detection system on several deployed web applications, accounting for proposed client- and server-side defenses. Our results illustrate the limitations of entropy measurements used in previous work and show how our new metric based on the Fisher criterion can be used to more robustly reveal side-channels in web applications.
{"title":"Automated black-box detection of side-channel vulnerabilities in web applications","authors":"Peter Chapman, David Evans","doi":"10.1145/2046707.2046737","DOIUrl":"https://doi.org/10.1145/2046707.2046737","url":null,"abstract":"Web applications divide their state between the client and the server. The frequent and highly dynamic client-server communication that is characteristic of modern web applications leaves them vulnerable to side-channel leaks, even over encrypted connections. We describe a black-box tool for detecting and quantifying the severity of side-channel vulnerabilities by analyzing network traffic over repeated crawls of a web application. By viewing the adversary as a multi-dimensional classifier, we develop a methodology to more thoroughly measure the distinguishably of network traffic for a variety of classification metrics. We evaluate our detection system on several deployed web applications, accounting for proposed client and server-side defenses. Our results illustrate the limitations of entropy measurements used in previous work and show how our new metric based on the Fisher criterion can be used to more robustly reveal side-channels in web applications.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"22 1","pages":"263-274"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75343104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fashion crimes: trending-term exploitation on the web
T. Moore, Nektarios Leontiadis, Nicolas Christin
Online service providers are engaged in constant conflict with miscreants who try to siphon a portion of legitimate traffic to make illicit profits. We study the abuse of "trending" search terms, in which miscreants place links to malware-distributing or ad-filled web sites in web search and Twitter results, by collecting and analyzing measurements over nine months from multiple sources. We devise heuristics to identify ad-filled sites, report on the prevalence of malware and ad-filled sites in trending-term search results, and measure the success in blocking such content. We uncover collusion across offending domains using network analysis, and use regression analysis to conclude that both malware and ad-filled sites thrive on less popular, less profitable trending terms. We build an economic model informed by our measurements and conclude that ad-filled sites and malware distribution may be economic substitutes. Finally, because our measurement interval spans February 2011, when Google announced changes to its ranking algorithm to root out low-quality sites, we can assess the impact of search-engine intervention on the profits miscreants can achieve.
{"title":"Fashion crimes: trending-term exploitation on the web","authors":"T. Moore, Nektarios Leontiadis, Nicolas Christin","doi":"10.1145/2046707.2046761","DOIUrl":"https://doi.org/10.1145/2046707.2046761","url":null,"abstract":"Online service providers are engaged in constant conflict with miscreants who try to siphon a portion of legitimate traffic to make illicit profits. We study the abuse of \"trending\" search terms, in which miscreants place links to malware-distributing or ad-filled web sites in web search and Twitter results, by collecting and analyzing measurements over nine months from multiple sources. We devise heuristics to identify ad-filled sites, report on the prevalence of malware and ad-filled sites in trending-term search results, and measure the success in blocking such content. We uncover collusion across offending domains using network analysis, and use regression analysis to conclude that both malware and ad-filled sites thrive on less popular, and less profitable trending terms. We build an economic model informed by our measurements and conclude that ad-filled sites and malware distribution may be economic substitutes. Finally, because our measurement interval spans February 2011, when Google announced changes to its ranking algorithm to root out low-quality sites, we can assess the impact of search-engine intervention on the profits miscreants can achieve.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"7 1","pages":"455-466"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87568177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SCRIPTGARD: automatic context-sensitive sanitization for large-scale legacy web applications
P. Saxena, D. Molnar, B. Livshits
We empirically analyzed sanitizer use in a shipping web application with over 400,000 lines of code and over 23,244 methods, the largest empirical analysis of sanitizer use of which we are aware. Our analysis reveals two novel classes of errors: context-mismatched sanitization and inconsistent multiple sanitization. Both of these arise not because sanitizers are incorrectly implemented, but because they are not placed in code correctly. Much of the work on cross-site scripting detection to date has focused on finding missing sanitizers in programs of average size; in large legacy applications, other sanitization issues leading to cross-site scripting emerge. To address these errors, we propose ScriptGard, a system for ASP.NET applications that can detect and repair the incorrect placement of sanitizers. ScriptGard serves both as a testing aid to developers and as a runtime mitigation technique. While mitigations for cross-site scripting attacks have seen intense prior research, none of them consider both server and browser context, none achieve the same degree of precision, and many require major changes to server-side code or to browsers. Our approach, in contrast, can be incrementally retrofitted to legacy systems with no changes to the source code and no browser changes. With our optimizations, when used for mitigation, ScriptGard incurs virtually no statistically significant overhead.
{"title":"SCRIPTGARD: automatic context-sensitive sanitization for large-scale legacy web applications","authors":"P. Saxena, D. Molnar, B. Livshits","doi":"10.1145/2046707.2046776","DOIUrl":"https://doi.org/10.1145/2046707.2046776","url":null,"abstract":"We empirically analyzed sanitizer use in a shipping web ap- plication with over 400,000 lines of code and over 23,244 methods, the largest empirical analysis of sanitizer use of which we are aware. Our analysis reveals two novel classes of errors: context-mismatched sanitization and inconsistent multiple sanitization. Both of these arise not because sanitizers are incorrectly implemented, but rather because they are not placed in code correctly. Much of the work on crosssite scripting detection to date has focused on finding missing sanitizers in programs of average size. In large legacy applications, other sanitization issues leading to cross-site scripting emerge. To address these errors, we propose ScriptGard, a system for ASP.NET applications which can detect and repair the incorrect placement of sanitizers. ScriptGard serves both as a testing aid to developers as well as a runtime mitigation technique. While mitigations for cross site scripting attacks have seen intense prior research, we consider both server and browser context, none of them achieve the same degree of precision, and many other mitigation techniques require major changes to server side code or to browsers. Our approach, in contrast, can be incrementally retrofitted to legacy systems with no changes to the source code and no browser changes. With our optimizations, when used for mitigation, ScriptGard incurs virtually no statistically significant overhead.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"116 1","pages":"601-614"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84928820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deobfuscation of virtualization-obfuscated software: a semantics-based approach
Kevin Coogan, Gen Lu, S. Debray
When new malware is discovered, it is important for researchers to analyze and understand it as quickly as possible. This task has been made more difficult in recent years by the increasing use of virtualization-obfuscated malware code. Such programs are difficult to comprehend and reverse engineer, since they resist both static and dynamic analysis techniques. Current approaches to dealing with such code first reverse-engineer the byte-code interpreter, then use this to work out the logic of the byte-code program. This outside-in approach produces good results when the structure of the interpreter is known, but cannot be applied to all cases. This paper proposes a different approach that focuses on identifying instructions that affect the observable behavior of the obfuscated code. This inside-out approach requires fewer assumptions and aims to complement existing techniques by broadening the domain of obfuscated programs eligible for automated analysis. Results from a prototype tool on real-world malicious code are encouraging.
{"title":"Deobfuscation of virtualization-obfuscated software: a semantics-based approach","authors":"Kevin Coogan, Gen Lu, S. Debray","doi":"10.1145/2046707.2046739","DOIUrl":"https://doi.org/10.1145/2046707.2046739","url":null,"abstract":"When new malware are discovered, it is important for researchers to analyze and understand them as quickly as possible. This task has been made more difficult in recent years as researchers have seen an increasing use of virtualization-obfuscated malware code. These programs are difficult to comprehend and reverse engineer, since they are resistant to both static and dynamic analysis techniques. Current approaches to dealing with such code first reverse-engineer the byte code interpreter, then use this to work out the logic of the byte code program. This outside-in approach produces good results when the structure of the interpreter is known, but cannot be applied to all cases. This paper proposes a different approach to the problem that focuses on identifying instructions that affect the observable behavior of the obfuscated code. This inside-out approach requires fewer assumptions, and aims to complement existing techniques by broadening the domain of obfuscated programs eligible for automated analysis. Results from a prototype tool on real-world malicious code are encouraging.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"2016 1","pages":"275-284"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86464033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demo: secure computation in JavaScript
Axel Schroepfer, F. Kerschbaum
Secure computation, e.g., using Yao's garbled circuit protocol, allows two parties to compute arbitrary functions without disclosing their inputs. A profitable application of secure computation is business optimization, characterized by a monetary benefit for all participants and high confidentiality of their respective input data. In most instances the consequences of input disclosure, e.g., loss of bargaining power, outweigh the benefits of collaboration, so these optimizations are currently not performed in industrial practice. Our demo shows such an optimization as a secure computation. The joint economic lot size (JELS) is the optimal order quantity between a buyer and a supplier. We implemented Yao's protocol in JavaScript, such that it can be executed using two web browsers. This has the additional benefit that the software can be offered as a service (SaaS) and can easily be integrated with other SaaS offerings, e.g., using mash-up technology.
{"title":"Demo: secure computation in JavaScript","authors":"Axel Schroepfer, F. Kerschbaum","doi":"10.1145/2046707.2093509","DOIUrl":"https://doi.org/10.1145/2046707.2093509","url":null,"abstract":"Secure computation, e.g. using Yao's garbled circuit protocol, allows two parties to compute arbitrary functions without disclosing their inputs. A profitable application of secure computation is business optimization. It is characterized by a monetary benefit for all participants and a high confidentiality of their respective input data. In most instances the consequences of input disclosure, e.g. loss of bargaining power, outweigh the benefits of collaboration. Therefore these optimizations are currently not performed in industrial practice.\u0000 Our demo shows such an optimization as a secure computation. The joint economic lot size (JELS) is the optimal order quantity between a buyer and supplier. We implemented Yao's protocol in JavaScript, such that it can be executed using two web browsers. This has the additional benefit that the software can be offered as a service (SaaS) and can be easily integrated with other SaaS offerings, e.g. using mash-up technology.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"19 1","pages":"849-852"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75077344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SURF: detecting and measuring search poisoning
Long Lu, R. Perdisci, Wenke Lee
Search engine optimization (SEO) techniques are often abused to promote websites among search results, a practice known as blackhat SEO. In this paper we tackle a newly emerging and especially aggressive class of blackhat SEO, namely search poisoning. Unlike other blackhat SEO techniques, which typically attempt to promote a website's ranking only under a limited set of search keywords relevant to the website's content, search poisoning techniques disregard any term-relevance constraint and are employed to poison popular search keywords with the sole purpose of diverting large numbers of users to short-lived, traffic-hungry websites for malicious purposes. To accurately detect search poisoning cases, we designed a novel detection system called SURF. SURF runs as a browser component to extract a number of robust (i.e., difficult to evade) detection features from search-then-visit browsing sessions, and is able to accurately classify malicious search-user redirections resulting from clicks on poisoned search results. Our evaluation on real-world search poisoning instances shows that SURF can achieve a detection rate of 99.1% at a false positive rate of 0.9%. Furthermore, we applied SURF to analyze a large dataset of search-related browsing sessions collected over a period of seven months starting in September 2010. Through this long-term measurement study we were able to reveal new trends and interesting patterns related to a great variety of poisoning cases, thus contributing to a better understanding of the prevalence and gravity of the search poisoning problem.
{"title":"SURF: detecting and measuring search poisoning","authors":"Long Lu, R. Perdisci, Wenke Lee","doi":"10.1145/2046707.2046762","DOIUrl":"https://doi.org/10.1145/2046707.2046762","url":null,"abstract":"Search engine optimization (SEO) techniques are often abused to promote websites among search results. This is a practice known as blackhat SEO. In this paper we tackle a newly emerging and especially aggressive class of blackhat SEO, namely search poisoning. Unlike other blackhat SEO techniques, which typically attempt to promote a website's ranking only under a limited set of search keywords relevant to the website's content, search poisoning techniques disregard any term relevance constraint and are employed to poison popular search keywords with the sole purpose of diverting large numbers of users to short-lived traffic-hungry websites for malicious purposes. To accurately detect search poisoning cases, we designed a novel detection system called SURF. SURF runs as a browser component to extract a number of robust (i.e., difficult to evade) detection features from search-then-visit browsing sessions, and is able to accurately classify malicious search user redirections resulted from user clicking on poisoned search results. Our evaluation on real-world search poisoning instances shows that SURF can achieve a detection rate of 99.1% at a false positive rate of 0.9%. Furthermore, we applied SURF to analyze a large dataset of search-related browsing sessions collected over a period of seven months starting in September 2010. Through this long-term measurement study we were able to reveal new trends and interesting patterns related to a great variety of poisoning cases, thus contributing to a better understanding of the prevalence and gravity of the search poisoning problem.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"31 11 1","pages":"467-476"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79686362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: on quantitative information flow metrics
Ji Zhu, M. Srivatsa
Information flow analysis is a powerful technique for reasoning about sensitive information that may be exposed during program execution. One promising approach is to model a program as a communication channel and leverage information-theoretic metrics to quantify such information flows. However, recent research has shown discrepancies in such metrics: for example, Smith et al. [5] showed examples wherein using the classical Shannon entropy measure for quantifying information flows may be counter-intuitive. Smith et al. [5] proposed a vulnerability measure in an attempt to resolve this problem, and this measure was subsequently enhanced by Hamadou et al. [2] into a belief-vulnerability metric. However, as pointed out by Smith et al., the vulnerability metric fails to distinguish between certain classes of programs (such as the password checker and the binary search program). In this paper, we propose a simple and intuitive approach that quantifies program information leakage as a probability distribution over the residual uncertainty of the high input, whose mean, variance, and worst-case measures offer insights into program vulnerability.
{"title":"Poster: on quantitative information flow metrics","authors":"Ji Zhu, M. Srivatsa","doi":"10.1145/2046707.2093516","DOIUrl":"https://doi.org/10.1145/2046707.2093516","url":null,"abstract":"Information flow analysis is a powerful technique for reasoning about sensitive information that may be exposed during program execution. One promising approach is to adopt a program as a communication channel model and leverage information theoretic metrics to quantify such information flows. However, recent research has shown discrepancies in such metrics: for example, Smith et. al. [5] showed examples wherein using the classical Shannon entropy measure for quantifying information flows may be counter-intuitive. Smith et. al. [5] proposed a vulnerability measure in an attempt to resolve this problem, and this measure was subsequently enhanced by Hamadou et. al. [2] into a beliefvulnerability metric. However, as pointed out by Smith et. al., the vulnerability metric fails to distinguish between certain classes of programs (such as the password checker and the binary search program). In this paper, we propose a simple and intuitive approach to quantify program information leakage as a probability distribution over the residual uncertainty of the high input whose mean, variance and worst case measures offer insights into program vulnerability.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"9 1","pages":"877-880"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84238216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proofs of ownership in remote storage systems
S. Halevi, Danny Harnik, Benny Pinkas, Alexandra Shulman-Peleg
Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file; the server then lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs of ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof of ownership under rigorous security definitions and rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme; our performance measurements indicate that it incurs only a small overhead compared to naive client-side deduplication.
{"title":"Proofs of ownership in remote storage systems","authors":"S. Halevi, Danny Harnik, Benny Pinkas, Alexandra Shulman-Peleg","doi":"10.1145/2046707.2046765","DOIUrl":"https://doi.org/10.1145/2046707.2046765","url":null,"abstract":"Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security","volume":"27 1","pages":"491-500"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74325814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}