Network tarpits, whereby a single host or appliance can masquerade as many fake hosts on a network and slow network scanners, are a form of defensive cyber-deception. In this work, we develop degreaser, an efficient fingerprinting tool to remotely detect tarpits. In addition to validating our tool in a controlled environment, we use degreaser to perform an Internet-wide scan. We discover tarpits of non-trivial size in the wild (prefixes as large as /16), and characterize their distribution and behavior. We then show how tarpits pollute existing network measurement surveys that are tarpit-naïve, e.g., Internet census data, and how degreaser can improve the accuracy of such surveys. Lastly, our findings suggest several ways in which to advance the realism of current network tarpits, thereby raising the bar on tarpits as an operational security mechanism.
{"title":"Uncovering network tarpits with degreaser","authors":"L. Alt, R. Beverly, A. Dainotti","doi":"10.1145/2664243.2664285","DOIUrl":"https://doi.org/10.1145/2664243.2664285","url":null,"abstract":"Network tarpits, whereby a single host or appliance can masquerade as many fake hosts on a network and slow network scanners, are a form of defensive cyber-deception. In this work, we develop degreaser, an efficient fingerprinting tool to remotely detect tarpits. In addition to validating our tool in a controlled environment, we use degreaser to perform an Internet-wide scan. We discover tarpits of non-trivial size in the wild (prefixes as large as/16), and characterize their distribution and behavior. We then show how tarpits pollute existing network measurement surveys that are tarpit-naïve, e.g. Internet census data, and how degreaser can improve the accuracy of such surveys. Lastly, our findings suggest several ways in which to advance the realism of current network tarpits, thereby raising the bar on tarpits as an operational security mechanism.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125578523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yiming Jing, Ziming Zhao, Gail-Joon Ahn, Hongxin Hu
Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and thereby infer that it is running in an emulator. However, little work exists that systematically discovers such heuristics, even though doing so would ultimately help prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.
{"title":"Morpheus: automatically generating heuristics to detect Android emulators","authors":"Yiming Jing, Ziming Zhao, Gail-Joon Ahn, Hongxin Hu","doi":"10.1145/2664243.2664250","DOIUrl":"https://doi.org/10.1145/2664243.2664250","url":null,"abstract":"Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128031466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marios Pomonis, Theofilos Petsios, Kangkook Jee, M. Polychronakis, A. Keromytis
Integer overflow and underflow, signedness conversion, and other types of arithmetic errors in C/C++ programs are among the most common software flaws that result in exploitable vulnerabilities. Despite significant advances in automating the detection of arithmetic errors, existing tools have not seen widespread adoption, mainly due to the high number of false positives they produce. Developers rely on wrap-around counters, bit shifts, and other language constructs for performance optimization and code compactness, but those same constructs, along with incorrect assumptions and conditions of undefined behavior, are often the main cause of severe vulnerabilities. Accurately differentiating between legitimate and erroneous uses of arithmetic language intricacies thus remains an open problem. As a step towards addressing this issue, we present IntFlow, an accurate arithmetic error detection tool that combines static information flow tracking and dynamic program analysis. By associating sources of untrusted input with the identified arithmetic errors, IntFlow differentiates between non-critical, possibly developer-intended undefined arithmetic operations and potentially exploitable arithmetic bugs. IntFlow examines a broad set of integer errors, covering almost all cases of C/C++ undefined behavior, and achieves high error detection coverage. We evaluated IntFlow using the SPEC benchmarks and a series of real-world applications, and measured its effectiveness in detecting arithmetic error vulnerabilities and reducing false positives. IntFlow successfully detected all real-world vulnerabilities in the tested applications and achieved a reduction of 89% in false positives over standalone static code instrumentation.
{"title":"IntFlow: improving the accuracy of arithmetic error detection using information flow tracking","authors":"Marios Pomonis, Theofilos Petsios, Kangkook Jee, M. Polychronakis, A. Keromytis","doi":"10.1145/2664243.2664282","DOIUrl":"https://doi.org/10.1145/2664243.2664282","url":null,"abstract":"Integer overflow and underflow, signedness conversion, and other types of arithmetic errors in C/C++ programs are among the most common software flaws that result in exploitable vulnerabilities. Despite significant advances in automating the detection of arithmetic errors, existing tools have not seen widespread adoption mainly due to their increased number of false positives. Developers rely on wrap-around counters, bit shifts, and other language constructs for performance optimizations and code compactness, but those same constructs, along with incorrect assumptions and conditions of undefined behavior, are often the main cause of severe vulnerabilities. Accurate differentiation between legitimate and erroneous uses of arithmetic language intricacies thus remains an open problem. As a step towards addressing this issue, we present IntFlow, an accurate arithmetic error detection tool that combines static information flow tracking and dynamic program analysis. By associating sources of untrusted input with the identified arithmetic errors, IntFlow differentiates between non-critical, possibly developer-intended undefined arithmetic operations, and potentially exploitable arithmetic bugs. IntFlow examines a broad set of integer errors, covering almost all cases of C/C++ undefined behaviors, and achieves high error detection coverage. We evaluated IntFlow using the SPEC benchmarks and a series of real-world applications, and measured its effectiveness in detecting arithmetic error vulnerabilities and reducing false positives. IntFlow successfully detected all real-world vulnerabilities for the tested applications and achieved a reduction of 89% in false positives over standalone static code instrumentation.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113975864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper reports the results of a user study of the Android graphical password system using an alternative survey methodology, pairwise preferences, which asks participants to select between pairs of patterns, indicating either a security or usability preference. By carefully selecting password pairs that isolate a single visual feature, the perceived usability and security of different features can be measured. We conducted a large IRB-approved survey using pairwise preferences which attracted 384 participants on Amazon Mechanical Turk. Analyzing the results, we find that visual features that can be attributed to complexity indicate a stronger perception of security, while spatial features, such as shifts up/down or left/right, are not strong indicators of either security or usability. We extended and applied the survey data by building logistic models to predict preferences, training on the features used in the survey as well as features proposed in related work. The logistic model predicted preferences with over 70% accuracy, twice the rate of random guessing, and the strongest feature in classification is password distance, the total length of all lines in the pattern, a feature not used in the online survey. This result provides insight into the internal visual calculus of users when comparing choices and selecting visual passwords; the ultimate goal of this work is to leverage that visual calculus to design systems where inherent perceptions of usability coincide with a known metric of security.
{"title":"Understanding visual perceptions of usability and security of Android's graphical password pattern","authors":"Adam J. Aviv, Dane Fichter","doi":"10.1145/2664243.2664253","DOIUrl":"https://doi.org/10.1145/2664243.2664253","url":null,"abstract":"This paper reports the results of a user study of the Android graphical password system using an alternative survey methodology, pairwise preferences, that requests participants to select between pairs of patterns indicating either a security or usability preference. By carefully selecting password pairs to isolate a visual feature, a visual perception of usability and security of different features can be measured. We conducted a large IRB-approved survey using pairwise preferences which attracted 384 participants on Amazon Mechanical Turk. Analyzing the results, we find that visual features that can be attributed to complexity indicated a stronger perception of security, while spatial features, such as shifts up/down or left/right are not strong indicators for security or usability. We extended and applied the survey data by building logistic models to predict perception preferences by training on features used in the survey and other features proposed in related work. The logistic model accurately predicted preferences above 70%, twice the rate of random guessing, and the strongest feature in classification is password distance, the total length of all lines in the pattern, a feature not used in the online survey. This result provides insight into the internal visual calculus of users when comparing choices and selecting visual passwords, and the ultimate goal of this work is to leverage the visual calculus to design systems where inherent perceptions for usability coincides with a known metric of security.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124858588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a transport-layer cipher-suite negotiation mechanism for the DNSSEC standard, allowing name servers to send responses containing only the keys and signatures that correspond to the cipher-suite option negotiated with the resolver, rather than sending all signatures and keys (as is done currently). As we show, the lack of cipher-suite negotiation is one of the factors impeding deployment of DNSSEC, and it also results in the adoption of weak ciphers. Indeed, the vast majority of domains rely on 1024-bit RSA, which is already considered insecure. Furthermore, domains that want better security have to support a number of cryptographic ciphers. As a result, DNSSEC responses are large and often fragmented, harming DNS functionality and causing inefficiency and vulnerabilities. A cipher-suite negotiation mechanism reduces response sizes, and hence solves the interoperability problems with DNSSEC-signed responses and prevents reflection and cache-poisoning attacks.
{"title":"Less is more: cipher-suite negotiation for DNSSEC","authors":"A. Herzberg, Haya Shulman, B. Crispo","doi":"10.1145/2664243.2664283","DOIUrl":"https://doi.org/10.1145/2664243.2664283","url":null,"abstract":"We propose a transport layer cipher-suite negotiation mechanism for DNSSEC standard, allowing name-servers to send responses containing only the keys and signatures that correspond to the cipher-suite option negotiated with the resolver, rather than sending all the signatures and keys (as is done currently). As we show, a lack of cipher-suite negotiation, is one of the factors impeding deployment of DNSSEC, and also results in adoption of weak ciphers. Indeed, the vast majority of domains rely on RSA 1024-bit cryptography, which is already considered insecure. Furthermore, domains, that want better security, have to support a number of cryptographic ciphers. As a result, the DNSSEC responses are large and often fragmented, harming the DNS functionality, and causing inefficiency and vulnerabilities. A cipher-suite negotiation mechanism reduces responses' sizes, and hence solves the interoperability problems with DNSSEC-signed responses, and prevents reflection and cache poisoning attacks.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130159613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Google's Android OS provides a lightweight IPC mechanism called Binder, which enables the development of feature-rich apps that seamlessly integrate services and data of other apps. Because apps can act both as service consumers and service providers, it is imperative that the IPC mechanism provide message receivers with message provenance information to establish trust. However, the Android OS currently fails to provide sufficient provenance information, which has led to a number of attacks. We present an extension to the Android IPC mechanism, called Scippa, that establishes IPC call-chains across application processes. Scippa provides the provenance information required to effectively prevent recent attacks such as confused deputy attacks. Our solution constitutes a system-centric approach that extends the Binder kernel module and Android's message handlers. Scippa integrates seamlessly into the system architecture, and our evaluation shows a performance overhead of only 2.23% on Android OS v4.2.2.
{"title":"Scippa: system-centric IPC provenance on Android","authors":"M. Backes, Sven Bugiel, S. Gerling","doi":"10.1145/2664243.2664264","DOIUrl":"https://doi.org/10.1145/2664243.2664264","url":null,"abstract":"Google's Android OS provides a lightweight IPC mechanism called Binder, which enables the development of feature-rich apps that seamlessly integrate services and data of other apps. Whenever apps can act both as service consumers and service providers, it is inevitable that the IPC mechanism provides message receivers with message provenance information to establish trust. However, the Android OS currently fails in providing sufficient provenance information, which has led to a number of attacks. We present an extension to the Android IPC mechanism, called Scippa, that establishes IPC call-chains across application processes. Scippa provides provenance information required to effectively prevent recent attacks such as confused deputy attacks. Our solution constitutes a system-centric approach that extends the Binder kernel module and Android's message handlers. Scippa integrates seamlessly into the system architecture and our evaluation shows a performance overhead of only 2.23% on Android OS v4.2.2.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129414474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mingshen Sun, Min Zheng, John C.S. Lui, Xuxian Jiang
Android has a dominant share of the mobile market, and mobile malware targeting Android devices is on the rise: Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths, and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole in an existing popular HIPS product that attackers can readily exploit. Then we present the design and implementation of a secure and extensible HIPS platform---"Patronus." Patronus not only provides intrusion prevention without the need to modify the Android system, but can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent intrusive behaviors efficiently and detect malware accurately, with very low performance overhead and power consumption.
{"title":"Design and implementation of an Android host-based intrusion prevention system","authors":"Mingshen Sun, Min Zheng, John C.S. Lui, Xuxian Jiang","doi":"10.1145/2664243.2664245","DOIUrl":"https://doi.org/10.1145/2664243.2664245","url":null,"abstract":"Android has a dominating share in the mobile market and there is a significant rise of mobile malware targeting Android devices. Android malware accounted for 97% of all mobile threats in 2013 [26]. To protect smartphones and prevent privacy leakage, companies have implemented various host-based intrusion prevention systems (HIPS) on their Android devices. In this paper, we first analyze the implementations, strengths and weaknesses of three popular HIPS architectures. We demonstrate a severe loophole and weakness of an existing popular HIPS product in which hackers can readily exploit. Then we present a design and implementation of a secure and extensible HIPS platform---\"Patronus.\" Patronus not only provides intrusion prevention without the need to modify the Android system, it can also dynamically detect existing malware based on runtime information. We propose a two-phase dynamic detection algorithm for detecting running malware. Our experiments show that Patronus can prevent the intrusive behaviors efficiently and detect malware accurately with a very low performance overhead and power consumption.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124434274","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy and personalization of mobile experiences are inherently in conflict: better personalization demands knowing more about the user, potentially violating user privacy. A promising approach to mitigate this tension is to migrate personalization to the client, an approach dubbed client-side personalization. This paper advocates for operating system support for client-side personalization and describes MoRePriv, an operating system service implemented in the Windows Phone OS. We argue that personalization support should be as ubiquitous as location support, and should be provided by a unified system within the OS, instead of by individual apps. We aim to provide a solution that will stoke innovation around mobile personalization. To enable easy application personalization, MoRePriv approximates users' interests using personae such as technophile or business executive. Using a number of case studies and crowd-sourced user studies, we illustrate how more complex personalization tasks can be achieved using MoRePriv. For privacy protection, MoRePriv distills sensitive user information to a coarse-grained profile, which limits the potential damage from information leaks. We see MoRePriv as a way to increase end-user privacy by enabling client-side computing, thus minimizing the need to share user data with the server. As such, MoRePriv shepherds the ecosystem towards a better privacy stance by nudging developers away from today's privacy-violating practices. Furthermore, MoRePriv can be combined with privacy-enhancing technologies and is complementary to recent advances in data leak detection.
{"title":"MoRePriv: mobile OS support for application personalization and privacy","authors":"Drew Davidson, Matt Fredrikson, B. Livshits","doi":"10.1145/2664243.2664266","DOIUrl":"https://doi.org/10.1145/2664243.2664266","url":null,"abstract":"Privacy and personalization of mobile experiences are inherently in conflict: better personalization demands knowing more about the user, potentially violating user privacy. A promising approach to mitigate this tension is to migrate personalization to the client, an approach dubbed client-side personalization. This paper advocates for operating system support for client-side personalization and describes MoRePriv, an operating system service implemented in the Windows Phone OS. We argue that personalization support should be as ubiquitous as location support, and should be provided by a unified system within the OS, instead of by individual apps. We aim to provide a solution that will stoke innovation around mobile personalization. To enable easy application personalization, MoRePriv approximates users' interests using personae such as technophile or business executive. Using a number of case studies and crowd-sourced user studies, we illustrate how more complex personalization tasks can be achieved using MoRePriv. For privacy protection, MoRePriv distills sensitive user information to a coarse-grained profile, which limits the potential damage from information leaks. We see MoRePriv as a way to increase end-user privacy by enabling client-side computing, thus minimizing the need to share user data with the server. As such, MoRePriv shepherds the ecosystem towards a better privacy stance by nudging developers away from today's privacy-violating practices. Furthermore, MoRePriv can be combined with privacy-enhancing technologies and is complimentary to recent advances in data leak detection.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125576147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cloud infrastructures are designed to share physical resources among many different tenants while ensuring overall security and tenant isolation. The complexity of dynamically changing and growing cloud environments, as well as insider attacks, can lead to misconfigurations that ultimately result in security failures. The detection of these misconfigurations and subsequent failures is a crucial challenge for cloud providers---an insurmountable challenge without tools. We establish an automated security analysis of dynamic virtualized infrastructures that detects misconfigurations and security failures in near real-time. The key is a systematic, differential approach that detects changes in the infrastructure and uses those changes to update its analysis, rather than performing one from scratch. Our system, called Cloud Radar, monitors virtualized infrastructures for changes, updates a graph model representation of the infrastructure, and also maintains a dynamic information flow graph to determine isolation properties. Whereas existing research in this area performs analyses on static snapshots of such infrastructures, our change-based approach yields significant performance improvements as demonstrated with our prototype for VMware environments.
{"title":"Cloud radar: near real-time detection of security failures in dynamic virtualized infrastructures","authors":"Sören Bleikertz, Carsten Vogel, Thomas Gross","doi":"10.1145/2664243.2664274","DOIUrl":"https://doi.org/10.1145/2664243.2664274","url":null,"abstract":"Cloud infrastructures are designed to share physical resources among many different tenants while ensuring overall security and tenant isolation. The complexity of dynamically changing and growing cloud environments, as well as insider attacks, can lead to misconfigurations that ultimately result in security failures. The detection of these misconfigurations and subsequent failures is a crucial challenge for cloud providers---an insurmountable challenge without tools. We establish an automated security analysis of dynamic virtualized infrastructures that detects misconfigurations and security failures in near real-time. The key is a systematic, differential approach that detects changes in the infrastructure and uses those changes to update its analysis, rather than performing one from scratch. Our system, called Cloud Radar, monitors virtualized infrastructures for changes, updates a graph model representation of the infrastructure, and also maintains a dynamic information flow graph to determine isolation properties. Whereas existing research in this area performs analyses on static snapshots of such infrastructures, our change-based approach yields significant performance improvements as demonstrated with our prototype for VMware environments.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133770183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, by reverse engineering Twitter spammers' tastes (their preferred targets to spam), we aim to provide guidelines for building more effective social honeypots and to generate new insights for defending against social spammers. Specifically, we first perform a measurement study by deploying "benchmark" social honeypots on Twitter with diverse and fine-grained social behavior patterns to trap spammers. After five months of data collection, we conduct an in-depth analysis of how Twitter spammers find their targets. Based on this analysis, we evaluate our new guidelines for building effective social honeypots by implementing "advanced" honeypots. In particular, within the same time period, these advanced honeypots trap spammers around 26 times faster than "traditional" honeypots. In the second part of our study, we investigate new active collection approaches to complement the fundamentally passive procedure of using honeypots to slowly attract spammers. Our goal is that, given limited resources and time, instead of blindly crawling all possible (or randomly sampled) Twitter accounts up front (for later spammer analysis), we use a lightweight strategy to prioritize the active crawling/sampling of likely spam accounts from the huge Twittersphere. Applying what we have learned about the tastes of spammers, we design two new, active, guided sampling approaches for collecting the accounts most likely to be spammers during crawling. According to our evaluation, our strategies can efficiently crawl/sample over 17,000 spam accounts within a short time with a considerably high "Hit Ratio", i.e., collecting 6 correct spam accounts in every 10 sampled accounts.
{"title":"A taste of tweets: reverse engineering Twitter spammers","authors":"Chao Yang, Jialong Zhang, G. Gu","doi":"10.1145/2664243.2664258","DOIUrl":"https://doi.org/10.1145/2664243.2664258","url":null,"abstract":"In this paper, through reverse engineering Twitter spammers' tastes (their preferred targets to spam), we aim at providing guidelines for building more effective social honeypots, and generating new insights to defend against social spammers. Specifically, we first perform a measurement study by deploying \"benchmark\" social honeypots on Twitter with diverse and fine-grained social behavior patterns to trap spammers. After five months' data collection, we make a deep analysis on how Twitter spammers find their targets. Based on the analysis, we evaluate our new guidelines for building effective social honeypots by implementing \"advanced\" honeypots. Particularly, within the same time period, using those advanced honeypots can trap spammers around 26 times faster than using \"traditional\" honeypots. In the second part of our study, we investigate new active collection approaches to complement the fundamentally passive procedure of using honeypots to slowly attract spammers. Our goal is that, given limited resources/time, instead of blindly crawling all possible (or randomly sampling) Twitter accounts at the first place (for later spammer analysis), we need a lightweight strategy to prioritize the active crawling/sampling of more likely spam accounts from the huge Twittersphere. Applying what we have learned about the tastes of spammers, we design two new, active and guided sampling approaches for collecting most likely spammer accounts during the crawling. According to our evaluation, our strategies could efficiently crawl/sample over 17,000 spam accounts within a short time with a considerably high \"Hit Ratio\", i.e., collecting 6 correct spam accounts in every 10 sampled accounts.","PeriodicalId":104443,"journal":{"name":"Proceedings of the 30th Annual Computer Security Applications Conference","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129399279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}