
Latest publications: Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security

(sp)iPhone: decoding vibrations from nearby keyboards using mobile phone accelerometers
Philip Marquardt, A. Verma, Henry Carter, Patrick Traynor
Mobile phones are increasingly equipped with a range of highly responsive sensors. From cameras and GPS receivers to three-axis accelerometers, applications running on these devices are able to experience rich interactions with their environment. Unfortunately, some applications may be able to use such sensors to monitor their surroundings in unintended ways. In this paper, we demonstrate that an application with access to accelerometer readings on a modern mobile phone can use such information to recover text entered on a nearby keyboard. Note that unlike previous emanation recovery papers, the accelerometers on such devices sample at near the Nyquist rate, making previous techniques unworkable. Our application instead detects and decodes keystrokes by measuring the relative physical position and distance between each vibration. We then match abstracted words against candidate dictionaries and record word recovery rates as high as 80%. In so doing, we demonstrate the potential to recover significant information from the vicinity of a mobile device without gaining access to resources generally considered to be the most likely sources of leakage (e.g., microphone, camera).
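The recovery pipeline the abstract describes (reduce each keystroke to a coarse position, each keystroke pair to a coarse distance, and match the resulting abstracted word against a dictionary) can be illustrated with a toy sketch. The left/right key split, column indices, and near/far threshold below are illustrative assumptions, not the paper's calibration:

```python
# Toy sketch of abstracted-word dictionary matching.
# Assumption: each key reduces to left/right (L/R) of the keyboard, and each
# consecutive key pair to near/far (N/F) by column distance.

# Hypothetical left-half keys on a QWERTY layout.
LEFT_KEYS = set("qwertasdfgzxcvb")

# Hypothetical per-key column index used for the near/far test.
COLUMNS = {c: i for i, c in enumerate("qwertyuiop")}
COLUMNS.update({c: i for i, c in enumerate("asdfghjkl")})
COLUMNS.update({c: i for i, c in enumerate("zxcvbnm")})

def profile(word: str) -> str:
    """Reduce a word to its L/R + N/F vibration profile."""
    out = []
    prev = None
    for ch in word.lower():
        out.append("L" if ch in LEFT_KEYS else "R")
        if prev is not None:
            # "near" if at most 3 columns apart (arbitrary threshold).
            out.append("N" if abs(COLUMNS[ch] - COLUMNS[prev]) <= 3 else "F")
        prev = ch
    return "".join(out)

def recover(observed_profile: str, dictionary: list[str]) -> list[str]:
    """Return all candidate words whose profile matches the observation."""
    return [w for w in dictionary if profile(w) == observed_profile]

dictionary = ["cat", "dog", "can", "ten"]
print(recover(profile("cat"), dictionary))  # → ['cat']
```

With finer-grained features and larger dictionaries the match is ambiguous more often, which is why the paper reports recovery *rates* rather than exact recovery.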
DOI: 10.1145/2046707.2046771 · Pages: 551-562 · Published: 2011-10-17
Citations: 288
Poster: Towards detecting DMA malware
Patrick Stewin, Jean-Pierre Seifert, Collin Mulliner
Malware residing in dedicated isolated hardware containing an auxiliary processor, such as that present in network, video, and CPU chipsets, is an emerging security threat. To attack the host system, this kind of malware uses the direct memory access (DMA) functionality. By utilizing DMA, the host system can be fully compromised, bypassing any kind of kernel-level protection. Traditional anti-virus software is not capable of detecting this kind of malware, since the auxiliary systems are completely isolated from the host CPU. In this work we present a novel method that is capable of detecting this kind of malware. To understand the properties of such malware we evaluated a prototype that attacks the host via DMA. Our prototype is executed in the chipset of an x86 architecture. We identified key properties of such malware that are crucial for our detection method. Our detection mechanism is based on monitoring the side effects of rogue DMA usage performed by the malware. We believe that our detection mechanism is general and a first step toward the detection of malware in dedicated isolated hardware.
DOI: 10.1145/2046707.2093511 · Pages: 857-860 · Published: 2011-10-17
Citations: 13
Automated black-box detection of side-channel vulnerabilities in web applications
Peter Chapman, David Evans
Web applications divide their state between the client and the server. The frequent and highly dynamic client-server communication that is characteristic of modern web applications leaves them vulnerable to side-channel leaks, even over encrypted connections. We describe a black-box tool for detecting and quantifying the severity of side-channel vulnerabilities by analyzing network traffic over repeated crawls of a web application. By viewing the adversary as a multi-dimensional classifier, we develop a methodology to more thoroughly measure the distinguishability of network traffic for a variety of classification metrics. We evaluate our detection system on several deployed web applications, accounting for proposed client and server-side defenses. Our results illustrate the limitations of entropy measurements used in previous work and show how our new metric based on the Fisher criterion can be used to more robustly reveal side-channels in web applications.
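The Fisher criterion the abstract ends on has a simple two-class form, J = (μ₁ − μ₂)² / (σ₁² + σ₂²): between-class separation over within-class spread. A minimal sketch on made-up packet-size traces (the traces and the choice of feature are assumptions; only the metric itself follows the abstract):

```python
from statistics import mean, pvariance

def fisher_score(a, b):
    """Two-class Fisher criterion: separation of means over summed variances."""
    return (mean(a) - mean(b)) ** 2 / (pvariance(a) + pvariance(b))

# Hypothetical response sizes (bytes) observed for two different user inputs.
state_a = [512, 518, 505, 510]
state_b = [1490, 1502, 1485, 1510]
state_a2 = [509, 515, 503, 512]  # traffic indistinguishable from state_a

# Distinguishable states score far higher than indistinguishable ones.
print(fisher_score(state_a, state_b) > fisher_score(state_a, state_a2))  # → True
```

A high score means an eavesdropper can reliably separate the two application states from traffic alone, even when the payload is encrypted.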
DOI: 10.1145/2046707.2046737 · Pages: 263-274 · Published: 2011-10-17
Citations: 73
Fashion crimes: trending-term exploitation on the web
T. Moore, Nektarios Leontiadis, Nicolas Christin
Online service providers are engaged in constant conflict with miscreants who try to siphon a portion of legitimate traffic to make illicit profits. We study the abuse of "trending" search terms, in which miscreants place links to malware-distributing or ad-filled web sites in web search and Twitter results, by collecting and analyzing measurements over nine months from multiple sources. We devise heuristics to identify ad-filled sites, report on the prevalence of malware and ad-filled sites in trending-term search results, and measure the success in blocking such content. We uncover collusion across offending domains using network analysis, and use regression analysis to conclude that both malware and ad-filled sites thrive on less popular, and less profitable trending terms. We build an economic model informed by our measurements and conclude that ad-filled sites and malware distribution may be economic substitutes. Finally, because our measurement interval spans February 2011, when Google announced changes to its ranking algorithm to root out low-quality sites, we can assess the impact of search-engine intervention on the profits miscreants can achieve.
DOI: 10.1145/2046707.2046761 · Pages: 455-466 · Published: 2011-10-17
Citations: 47
SCRIPTGARD: automatic context-sensitive sanitization for large-scale legacy web applications
P. Saxena, D. Molnar, B. Livshits
We empirically analyzed sanitizer use in a shipping web application with over 400,000 lines of code and over 23,244 methods, the largest empirical analysis of sanitizer use of which we are aware. Our analysis reveals two novel classes of errors: context-mismatched sanitization and inconsistent multiple sanitization. Both of these arise not because sanitizers are incorrectly implemented, but rather because they are not placed in code correctly. Much of the work on cross-site scripting detection to date has focused on finding missing sanitizers in programs of average size. In large legacy applications, other sanitization issues leading to cross-site scripting emerge. To address these errors, we propose ScriptGard, a system for ASP.NET applications which can detect and repair the incorrect placement of sanitizers. ScriptGard serves both as a testing aid to developers and as a runtime mitigation technique. While mitigations for cross-site scripting attacks have seen intense prior research, none achieve the same degree of precision in considering both server and browser context, and many other mitigation techniques require major changes to server-side code or to browsers. Our approach, in contrast, can be incrementally retrofitted to legacy systems with no changes to the source code and no browser changes. With our optimizations, when used for mitigation, ScriptGard incurs virtually no statistically significant overhead.
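A context-mismatched sanitization error of the kind the abstract names can be shown in a few lines: a sanitizer that is sufficient in an HTML text context does nothing useful in a JavaScript string context. The tag-stripping sanitizer here is a generic stand-in, not ScriptGard's or ASP.NET's actual sanitizer:

```python
import re

def strip_tags(s: str) -> str:
    """A common (hypothetical) sanitizer: remove anything that looks like a tag."""
    return re.sub(r"</?[^>]*>", "", s)

payload = "'; alert(1); //"

# HTML text context: the sanitizer is adequately placed here; any injected
# <script> tag would be stripped.
html_ctx = "<p>" + strip_tags(payload) + "</p>"

# JavaScript string context: the payload contains no tags, so it survives
# intact and breaks out of the string literal. Same sanitizer, wrong context.
js_ctx = "var q = '" + strip_tags(payload) + "';"
print(js_ctx)  # → var q = ''; alert(1); //';
```

The bug is the *placement*, not the sanitizer implementation, which is exactly the class of error ScriptGard is designed to detect and repair.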
DOI: 10.1145/2046707.2046776 · Pages: 601-614 · Published: 2011-10-17
Citations: 117
Deobfuscation of virtualization-obfuscated software: a semantics-based approach
Kevin Coogan, Gen Lu, S. Debray
When new malware is discovered, it is important for researchers to analyze and understand it as quickly as possible. This task has been made more difficult in recent years as researchers have seen an increasing use of virtualization-obfuscated malware code. These programs are difficult to comprehend and reverse engineer, since they are resistant to both static and dynamic analysis techniques. Current approaches to dealing with such code first reverse-engineer the byte code interpreter, then use this to work out the logic of the byte code program. This outside-in approach produces good results when the structure of the interpreter is known, but cannot be applied to all cases. This paper proposes a different approach to the problem that focuses on identifying instructions that affect the observable behavior of the obfuscated code. This inside-out approach requires fewer assumptions, and aims to complement existing techniques by broadening the domain of obfuscated programs eligible for automated analysis. Results from a prototype tool on real-world malicious code are encouraging.
DOI: 10.1145/2046707.2046739 · Pages: 275-284 · Published: 2011-10-17
Citations: 127
Demo: secure computation in JavaScript
Axel Schroepfer, F. Kerschbaum
Secure computation, e.g. using Yao's garbled circuit protocol, allows two parties to compute arbitrary functions without disclosing their inputs. A profitable application of secure computation is business optimization. It is characterized by a monetary benefit for all participants and a high confidentiality of their respective input data. In most instances the consequences of input disclosure, e.g. loss of bargaining power, outweigh the benefits of collaboration. Therefore these optimizations are currently not performed in industrial practice. Our demo shows such an optimization as a secure computation. The joint economic lot size (JELS) is the optimal order quantity between a buyer and supplier. We implemented Yao's protocol in JavaScript, such that it can be executed using two web browsers. This has the additional benefit that the software can be offered as a service (SaaS) and can be easily integrated with other SaaS offerings, e.g. using mash-up technology.
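For context, the joint economic lot size has an EOQ-style closed form in one standard formulation (an assumption here; the abstract does not state the exact cost model the demo uses): the square root of twice the demand times the combined ordering and setup costs, over the combined holding costs. The protocol's contribution is computing such a function with Yao's garbled circuits so that neither party reveals its cost inputs; this sketch only shows the plaintext function being computed:

```python
from math import sqrt

def jels(demand, order_cost_buyer, setup_cost_supplier,
         holding_cost_buyer, holding_cost_supplier):
    """Joint economic lot size, assumed form:
    sqrt(2 * D * (A_b + A_s) / (h_b + h_s)).
    The buyer's inputs (A_b, h_b) and supplier's inputs (A_s, h_s) are the
    values each party would keep private in the secure-computation setting."""
    return sqrt(2 * demand * (order_cost_buyer + setup_cost_supplier)
                / (holding_cost_buyer + holding_cost_supplier))

# Hypothetical inputs: annual demand 1000 units, buyer order cost 25,
# supplier setup cost 100, holding costs 4 and 1 per unit per year.
print(round(jels(1000, 25, 100, 4, 1), 2))  # → 223.61
```

In the demo, this arithmetic would be expressed as a Boolean circuit and evaluated under Yao's protocol between the two browsers, so only the resulting lot size is revealed.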
DOI: 10.1145/2046707.2093509 · Pages: 849-852 · Published: 2011-10-17
Citations: 8
SURF: detecting and measuring search poisoning
Long Lu, R. Perdisci, Wenke Lee
Search engine optimization (SEO) techniques are often abused to promote websites among search results. This is a practice known as blackhat SEO. In this paper we tackle a newly emerging and especially aggressive class of blackhat SEO, namely search poisoning. Unlike other blackhat SEO techniques, which typically attempt to promote a website's ranking only under a limited set of search keywords relevant to the website's content, search poisoning techniques disregard any term relevance constraint and are employed to poison popular search keywords with the sole purpose of diverting large numbers of users to short-lived traffic-hungry websites for malicious purposes. To accurately detect search poisoning cases, we designed a novel detection system called SURF. SURF runs as a browser component to extract a number of robust (i.e., difficult to evade) detection features from search-then-visit browsing sessions, and is able to accurately classify malicious search user redirections resulting from users clicking on poisoned search results. Our evaluation on real-world search poisoning instances shows that SURF can achieve a detection rate of 99.1% at a false positive rate of 0.9%. Furthermore, we applied SURF to analyze a large dataset of search-related browsing sessions collected over a period of seven months starting in September 2010. Through this long-term measurement study we were able to reveal new trends and interesting patterns related to a great variety of poisoning cases, thus contributing to a better understanding of the prevalence and gravity of the search poisoning problem.
DOI: 10.1145/2046707.2046762 · Pages: 467-476 · Published: 2011-10-17
Citations: 103
Poster: on quantitative information flow metrics
Ji Zhu, M. Srivatsa
Information flow analysis is a powerful technique for reasoning about sensitive information that may be exposed during program execution. One promising approach is to adopt a program as a communication channel model and leverage information theoretic metrics to quantify such information flows. However, recent research has shown discrepancies in such metrics: for example, Smith et al. [5] showed examples wherein using the classical Shannon entropy measure for quantifying information flows may be counter-intuitive. Smith et al. [5] proposed a vulnerability measure in an attempt to resolve this problem, and this measure was subsequently enhanced by Hamadou et al. [2] into a belief-vulnerability metric. However, as pointed out by Smith et al., the vulnerability metric fails to distinguish between certain classes of programs (such as the password checker and the binary search program). In this paper, we propose a simple and intuitive approach to quantify program information leakage as a probability distribution over the residual uncertainty of the high input whose mean, variance and worst case measures offer insights into program vulnerability.
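The two metrics being contrasted are easy to compute directly: Shannon entropy H(X) = −Σ p·log₂ p, versus Smith's vulnerability V(X) = maxₓ p(x), the probability that an adversary guesses the secret in one try. A minimal sketch with made-up priors over a secret:

```python
from math import log2

def shannon_entropy(dist):
    """Shannon entropy in bits of a probability distribution."""
    return -sum(p * log2(p) for p in dist if p > 0)

def vulnerability(dist):
    """Smith's vulnerability: the adversary's one-guess success probability."""
    return max(dist)

# Two priors over 4 possible secrets with the same support size.
uniform = [0.25, 0.25, 0.25, 0.25]
skewed  = [0.7, 0.1, 0.1, 0.1]

print(shannon_entropy(uniform), vulnerability(uniform))  # → 2.0 0.25
print(vulnerability(skewed))                             # → 0.7
```

The skewed prior still carries over a bit of Shannon entropy, yet the adversary already wins 70% of one-guess attacks, which is the kind of discrepancy that motivated the vulnerability measure (and min-entropy, −log₂ V) in the first place.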
{"title":"Poster: on quantitative information flow metrics","authors":"Ji Zhu, M. Srivatsa","doi":"10.1145/2046707.2093516","DOIUrl":"https://doi.org/10.1145/2046707.2093516","url":null,"abstract":"Information flow analysis is a powerful technique for reasoning about sensitive information that may be exposed during program execution. One promising approach is to adopt a program as a communication channel model and leverage information theoretic metrics to quantify such information flows. However, recent research has shown discrepancies in such metrics: for example, Smith et. al. [5] showed examples wherein using the classical Shannon entropy measure for quantifying information flows may be counter-intuitive. Smith et. al. [5] proposed a vulnerability measure in an attempt to resolve this problem, and this measure was subsequently enhanced by Hamadou et. al. [2] into a beliefvulnerability metric. However, as pointed out by Smith et. al., the vulnerability metric fails to distinguish between certain classes of programs (such as the password checker and the binary search program). In this paper, we propose a simple and intuitive approach to quantify program information leakage as a probability distribution over the residual uncertainty of the high input whose mean, variance and worst case measures offer insights into program vulnerability.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. 
ACM Conference on Computer and Communications Security","volume":"9 1","pages":"877-880"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84238216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 5
Proofs of ownership in remote storage systems
S. Halevi, Danny Harnik, Benny Pinkas, Alexandra Shulman-Peleg
Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signature of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file; the server then lets the attacker download the entire file. (In parallel to our work, a subset of these attacks was recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership under rigorous security definitions and the rigorous efficiency requirements of petabyte-scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.
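The Merkle-tree idea behind such proofs can be sketched as a challenge/response: the server keeps only the tree root, challenges the client on a randomly chosen leaf block, and verifies the returned authentication path. This is a minimal illustration under assumed parameters (8 random 64-byte blocks, SHA-256), not the paper's actual encoding-based construction:

```python
import hashlib
import os
import random

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return the tree as a list of levels, leaves first, root last."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, idx):
    """Authentication path for leaf idx: the sibling hash at each level."""
    path = []
    for level in levels[:-1]:
        path.append(level[idx ^ 1])  # sibling of the current node
        idx //= 2
    return path

def verify(root, leaf_block, idx, path):
    """Recompute the root from the claimed block and its path."""
    node = h(leaf_block)
    for sib in path:
        node = h(node + sib) if idx % 2 == 0 else h(sib + node)
        idx //= 2
    return node == root

# The server stores only `root`; a client proves possession of the blocks.
blocks = [os.urandom(64) for _ in range(8)]   # power-of-two leaf count
levels = build_tree(blocks)
root = levels[-1][0]

challenge = random.randrange(len(blocks))     # server picks a random leaf
path = prove(levels, challenge)               # client answers with the path
print(verify(root, blocks[challenge], challenge, path))  # True
```

A client that knows only a short hash of the file cannot answer challenges on arbitrary leaves, which is what defeats the hash-as-proxy attack described above.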
{"title":"Proofs of ownership in remote storage systems","authors":"S. Halevi, Danny Harnik, Benny Pinkas, Alexandra Shulman-Peleg","doi":"10.1145/2046707.2046765","DOIUrl":"https://doi.org/10.1145/2046707.2046765","url":null,"abstract":"Cloud storage systems are becoming increasingly popular. A promising technology that keeps their cost down is deduplication, which stores only a single copy of repeating data. Client-side deduplication attempts to identify deduplication opportunities already at the client and save the bandwidth of uploading copies of existing files to the server. In this work we identify attacks that exploit client-side deduplication, allowing an attacker to gain access to arbitrary-size files of other users based on a very small hash signatures of these files. More specifically, an attacker who knows the hash signature of a file can convince the storage service that it owns that file, hence the server lets the attacker download the entire file. (In parallel to our work, a subset of these attacks were recently introduced in the wild with respect to the Dropbox file synchronization service.) To overcome such attacks, we introduce the notion of proofs-of-ownership (PoWs), which lets a client efficiently prove to a server that that the client holds a file, rather than just some short information about it. We formalize the concept of proof-of-ownership, under rigorous security definitions, and rigorous efficiency requirements of Petabyte scale storage systems. We then present solutions based on Merkle trees and specific encodings, and analyze their security. We implemented one variant of the scheme. Our performance measurements indicate that the scheme incurs only a small overhead compared to naive client-side deduplication.","PeriodicalId":72687,"journal":{"name":"Conference on Computer and Communications Security : proceedings of the ... conference on computer and communications security. 
ACM Conference on Computer and Communications Security","volume":"27 1","pages":"491-500"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74325814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 723