This paper introduces two generic mechanisms for comparing security policies from a semantic point of view. First, a notion of embedding is defined in order to compare policies over a common domain. Then, interpretations of security policies are introduced in order to consider their properties over arbitrary domains. Combining interpretations and embeddings thus makes it possible to compare policies expressed over different domains. Throughout the paper, we illustrate our definitions by defining a flow-based interpretation of access control and by comparing classical access control policies against a hierarchy of abstract flow policies, thereby characterizing the flow properties that access control policies can ensure.
{"title":"Semantic Comparison of Security Policies: From Access Control Policies to Flow Properties","authors":"M. Jaume","doi":"10.1109/SPW.2012.33","DOIUrl":"https://doi.org/10.1109/SPW.2012.33","url":null,"abstract":"This paper introduces two generic mechanisms allowing to compare security policies from a semantical point of view. First, a notion of embedding is defined in order to compare policies over a common domain. Then, interpretations of security policies are introduced in order to consider their properties over arbitrary domains. Thus, combining interpretations and embeddings allows to compare policies expressed over different domains. Along the lines of this paper, we illustrate our definitions by defining a flow-based interpretation of access control and by comparing classical access control policies according to a hierarchy of abstract flow policies, thus characterizing flow properties which can be ensured by access control policies.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125415361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Trust-based approaches have been widely used to counter insider attacks in wireless sensor networks because traditional cryptography-based security mechanisms, such as authentication and authorization, are not effective against such attacks. A trust model, the core component of a trust mechanism, provides a quantitative way to evaluate the trustworthiness of sensor nodes. Trust evaluation is normally conducted by watchdog nodes, which monitor and collect other sensors' behavior information. Most existing work focuses on the design of trust models and on how these models can be used to defend against certain insider attacks. However, these studies are empirical, with the implicit assumption that the trust models themselves are secure and reliable. In this paper, we discuss several security vulnerabilities of watchdog and trust mechanisms, examine how inside attackers can exploit these security holes, and propose defenses that mitigate the weaknesses of both.
{"title":"Insider Threats against Trust Mechanism with Watchdog and Defending Approaches in Wireless Sensor Networks","authors":"Youngho Cho, G. Qu, Yuanming Wu","doi":"10.1109/SPW.2012.32","DOIUrl":"https://doi.org/10.1109/SPW.2012.32","url":null,"abstract":"Trust based approaches have been widely used to counter insider attacks in wireless sensor networks because traditional cryptography-based security mechanisms such as authentication and authorization are not effective against such attacks. A trust model, which is the core component of a trust mechanism, provides a quantitative way to evaluate the trustworthiness of sensor nodes. The trust evaluation is normally conducted by watchdog nodes, which monitor and collect other sensors' behavior information. Most existing works mainly focus on the design of the trust models and how these models can be used to defend against certain insider attacks. However, these studies are empirical with the implicit assumption that the trust models are secure and reliable. In this paper, we discuss several security vulnerabilities that watchdog and trust mechanisms have, examine how inside attackers can exploit these security holes, and finally propose defending approaches that can mitigate the weaknesses of trust mechanism and watchdog.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114490903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting insider attacks remains one of the most difficult challenges in securing sensitive data. Decoy information and documents are a promising approach to detecting malicious masqueraders; however, false positives can interfere with legitimate work and waste user time. We propose generating foreign-language decoy documents sprinkled with untranslatable, enticing proper nouns such as company names, hot topics, or apparent login information. This type of decoy is meant to serve three purposes. First, using a language that is not used in normal business practice gives real users a clear signal that the document is fake, so they waste less time examining it. Second, an enticed attacker must exfiltrate the document's contents in order to translate it, providing a cleaner signal of malicious activity. Third, the decoys consume significant adversarial resources: the attacker must still read the document and decide whether it contains valuable information, which is made harder because the content is somewhat scrambled by translation. In this paper, we expand on the rationale behind foreign-language decoys. We present a preliminary evaluation showing that they significantly increase the attacker's cost, measured by the time it takes to determine whether a document is real and potentially valuable or entirely bogus, confounding the goal of exfiltrating important sensitive information.
{"title":"Lost in Translation: Improving Decoy Documents via Automated Translation","authors":"Jonathan Voris, Nathaniel Boggs, S. Stolfo","doi":"10.1109/SPW.2012.20","DOIUrl":"https://doi.org/10.1109/SPW.2012.20","url":null,"abstract":"Detecting insider attacks continues to prove to be one of the most difficult challenges in securing sensitive data. Decoy information and documents represent a promising approach to detecting malicious masqueraders, however, false positives can interfere with legitimate work and take up user time. We propose generating foreign language decoy documents that are sprinkled with untranslatable enticing proper nouns such as company names, hot topics, or apparent login information. Our goal is for this type of decoy to serve three main purposes. First, using a language that is not used in normal business practice gives real users a clear signal that the document is fake, so they waste less time examining it. Second, an attacker, if enticed, will need to exfiltrate the document's contents in order to translate it, providing a cleaner signal of malicious activity. Third, we consume significant adversarial resources as they must still read the document and decide if it contains valuable information, which is made more difficult as it will be somewhat scrambled through translation. In this paper, we expand upon the rationale behind using foreign language decoys. We present a preliminary evaluation which shows how they significantly increase the cost to attackers in terms of the amount of time that it takes to determine if a document is real and potentially contains valuable information or is entirely bogus, confounding their goal of exfiltrating important sensitive information.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129655754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There is a wealth of sensitive information available on the Web about any individual, generated either by that person or by others on social networking sites. This information could be used to make important decisions about that individual. The problem is that although people know that searches for their personal information are possible, they have no way to control the data that others put on the Web or to indicate how they would like to restrict the usage of their own data. We describe a framework called Policy Aware Social Miner (PASM) that addresses these problems by giving users a way to semantically annotate data on the Web, using policies to guide how searches about them should be executed. PASM accepts search queries and applies the user's policies to the results. It filters results over data the user owns and attaches the user's refutation link to search results that the user does not own. These usage-control mechanisms for privacy allow users to break away from siloed data privacy management and have their privacy settings applied to all of their data available on the Web.
{"title":"Policy Aware Social Miner","authors":"Sharon Paradesi, O. Seneviratne, Lalana Kagal","doi":"10.1109/SPW.2012.28","DOIUrl":"https://doi.org/10.1109/SPW.2012.28","url":null,"abstract":"There is a wealth of sensitive information available on the Web about any individual that is generated either by her or by others on social networking sites. This information could be used to make important decisions about that individual. The problem is that although people know that searches for their personal information are possible, they have no way to either control the data that is put on the Web by others or indicate how they would like to restrict usage of their own data. We describe a framework called Policy Aware Social Miner (PASM) that would provide a solution to these problems by giving users a way to semantically annotate data on the Web using policies to guide how searches about them should be executed. PASM accepts search queries and applies the user's policies on the results. It filters results over data the user owns and provides the user's refutation link on search results that the user does not own. These usage control mechanisms for privacy allow users to break away from siloed data privacy management and have their privacy settings applied to all their data available on the Web.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123182282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The annual incidence of insider attacks continues to grow, and there are indications this trend will continue. While a number of existing tools can accurately identify known attacks, they are reactive (as opposed to proactive) in their enforcement and may be eluded by previously unseen adversarial behaviors. This paper proposes an approach that combines Structural Anomaly Detection (SA) over social and information networks with Psychological Profiling (PP) of individuals. SA uses graph analysis, dynamic tracking, and machine learning to detect structural anomalies in large-scale information network data, while PP constructs dynamic psychological profiles from behavioral patterns. Threats are finally identified through a fusion and ranking of the outcomes from SA and PP. The proposed approach is illustrated by applying it to a large data set from a massively multiplayer online game, World of Warcraft (WoW). The data set contains behavior traces from over 350,000 characters observed over a period of six months. SA is used to predict if and when characters will quit their guild (a player association similar to a club or workgroup in non-gaming contexts), possibly damaging these social groups. PP serves to estimate the five-factor personality model for all characters. Both analyses show good results on the gaming data set and thus validate the proposed approach.
{"title":"Proactive Insider Threat Detection through Graph Learning and Psychological Context","authors":"Oliver Brdiczka, Juan Liu, B. Price, Jianqiang Shen, Akshay Patil, Richard Chow, Eugene Bart, Nicolas Ducheneaut","doi":"10.1109/SPW.2012.29","DOIUrl":"https://doi.org/10.1109/SPW.2012.29","url":null,"abstract":"The annual incidence of insider attacks continues to grow, and there are indications this trend will continue. While there are a number of existing tools that can accurately identify known attacks, these are reactive (as opposed to proactive) in their enforcement, and may be eluded by previously unseen, adversarial behaviors. This paper proposes an approach that combines Structural Anomaly Detection (SA) from social and information networks and Psychological Profiling (PP) of individuals. SA uses technologies including graph analysis, dynamic tracking, and machine learning to detect structural anomalies in large-scale information network data, while PP constructs dynamic psychological profiles from behavioral patterns. Threats are finally identified through a fusion and ranking of outcomes from SA and PP. The proposed approach is illustrated by applying it to a large data set from a massively multi-player online game, World of War craft (WoW). The data set contains behavior traces from over 350,000 characters observed over a period of 6 months. SA is used to predict if and when characters quit their guild (a player association with similarities to a club or workgroup in non-gaming contexts), possibly causing damage to these social groups. PP serves to estimate the five-factor personality model for all characters. Both threads show good results on the gaming data set and thus validate the proposed approach.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122074541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kernel-level keyloggers, which are installed as part of the operating system (OS) with complete control of kernel code, data, and resources, are a growing and very serious threat to the security of current systems. Defending against this type of malware means defending the kernel itself against compromise, which remains an open and difficult problem. This paper details the implementation of two classical kernel-level keyloggers for Linux 2.6.38 and shows how current defense approaches still fail to protect OSes against this type of malware. We further present our current research directions for mitigating this threat by employing an architecture in which a guest OS and a virtual machine layer actively collaborate to guarantee kernel integrity. This collaborative approach allows us to better bridge the semantic gap between the OS and architecture layers and to devise stronger and more flexible defenses that protect the integrity of OS kernels.
{"title":"Bridging the Semantic Gap to Mitigate Kernel-Level Keyloggers","authors":"Jesús Navarro, Enrique Naudon, Daniela Oliveira","doi":"10.1109/SPW.2012.22","DOIUrl":"https://doi.org/10.1109/SPW.2012.22","url":null,"abstract":"Kernel-level key loggers, which are installed as part of the operating system (OS) with complete control of kernel code, data and resources, are a growing and very serious threat to the security of current systems. Defending against this type of malware means defending the kernel itself against compromise and it is still an open and difficult problem. This paper details the implementation of two classical kernel-level key loggers for Linux 2.6.38 and how current defense approaches still fail to protect OSes against this type of malware. We further present our current research directions to mitigate this threat by employing an architecture where a guest OS and a virtual machine layer actively collaborate to guarantee kernel integrity. This collaborative approach allows us to better bridge the semantic gap between the OS and architecture layers and devise stronger and more flexible defense solutions to protect the integrity of OS kernels.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128614003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Physical Unclonable Functions (PUFs), or Physical One-Way Functions (P-OWFs), are physical systems whose responses to input stimuli are easy to measure but hard to clone. The unclonability property stems from the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics, and it makes PUFs useful for solving problems such as device authentication, software protection and licensing, and certified execution. In this paper, we investigate the effectiveness of PUFs for software protection in hostile offline settings. We show that traditional non-computational (black-box) PUFs cannot solve the software protection problem in this context. We provide two real-world adversary models (weak and strong variants) and security definitions for each. We propose schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.
{"title":"A Theoretical Analysis: Physical Unclonable Functions and the Software Protection Problem","authors":"Rishab Nithyanand, John Solis","doi":"10.1109/SPW.2012.16","DOIUrl":"https://doi.org/10.1109/SPW.2012.16","url":null,"abstract":"Physical Unclonable Functions (PUFs) or Physical One Way Functions (P-OWFs) are physical systems whose responses to input stimuli are easy to measure but hard to clone. The unclonability property is due to the accepted hardness of replicating the multitude of uncontrollable manufacturing characteristics and makes PUFs useful in solving problems such as device authentication, software protection and licensing, and certified execution. In this paper, we investigate the effectiveness of PUFs for software protection in hostile offline settings. We show that traditional non-computational (black-box) PUFs cannot solve the software protection problem in this context. We provide two real-world adversary models (weak and strong variants) and security definitions for each. We propose schemes secure against the weak adversary and show that no scheme is secure against a strong adversary without the use of trusted hardware. Finally, we present a protection scheme secure against strong adversaries based on trusted hardware.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121677464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Users' mental models of security, though possibly incorrect, embody patterns of reasoning about security that lead to systematic behaviors across tasks and may be shared across populations of users. Researchers have identified widely held mental models of security, usually with the purpose of improving communications and warnings about vulnerabilities. Here, we implement previously identified models in order to explore their use for predicting user behavior. We describe a general approach for implementing the models in agents that simulate human behavior within a network security test bed, and we show that the implementations produce behaviors similar to those of users who hold the corresponding models. The approach makes it relatively simple for researchers to implement new models within the agent platform and experiment with their effects in a multi-agent setting.
{"title":"Implementing Mental Models","authors":"J. Blythe, L. Camp","doi":"10.1109/SPW.2012.31","DOIUrl":"https://doi.org/10.1109/SPW.2012.31","url":null,"abstract":"Users' mental models of security, though possibly incorrect, embody patterns of reasoning about security that lead to systematic behaviors across tasks and may be shared across populations of users. Researchers have identified widely held mental models of security, usually with the purpose of improving communications and warnings about vulnerabilities. Here, we implement previously identified models in order to explore their use for predicting user behavior. We describe a general approach for implementing the models in agents that simulate human behavior within a network security test bed, and show that the implementations produce behaviors similar to those of users who hold them. The approach is relatively simple for researchers to implement new models within the agent platform to experiment with their effects in a multi-agent setting.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132911623","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Phishing constitutes more than half of all reported security incidents on the Internet. The attacks cause users to erroneously trust websites and enter sensitive data because the email notifications and the website look familiar. Our hypothesis is that familiarity can be defined formally using history data from the user's computer, and that effective presentation of this data can help users distinguish phishing messages from trustworthy messages.
{"title":"Towards a Semantics of Phish","authors":"H. Orman","doi":"10.1109/SPW.2012.12","DOIUrl":"https://doi.org/10.1109/SPW.2012.12","url":null,"abstract":"Phishing constitutes more than half of all reported security incident son the Internet. The attacks cause users to erroneously trust websites and enter sensitive data because the email notifications and the website look familiar. Our hypothesis is that familiarity can be defined formally using history data from the user's computer, and effective presentation of the data can help users distinguishphishing messages from trustworthy messages.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132257641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The increasing use of online review sites is creating new challenges for user privacy. Although reviews are public, many users inadvertently disclose private information about relationship, location, and temporal attributes to the world. This research protects users of online review sites from the inadvertent disclosure of private information in three ways. First, the types of unstructured and structured information made public by online review sites are characterized and used to grade those sites on their attention to privacy. Second, a privacy-check tool that uses keyword matching and named-entity recognition to annotate potentially sensitive review text is presented. Third, we raise awareness of the privacy threat in online review sites through examples and statistics derived from the privacy-check tool.
{"title":"Privacy in Online Review Sites","authors":"M. Burkholder, R. Greenstadt","doi":"10.1109/SPW.2012.23","DOIUrl":"https://doi.org/10.1109/SPW.2012.23","url":null,"abstract":"The increasing use of online review sites is creating new challenges for user privacy. Although reviews are public, many users inadvertently disclose private information about relationship, location, and temporal attributes to the world. This research protects users of online review sites from the inadvertent disclosure of private information in three ways. First, the types of unstructured and structured information made public by online review sites are characterized and used to grade those sites on their attention to privacy. Second, a privacy-check tool that uses keyword matching and named-entity recognition to annotate potentially sensitive review text is presented. Third, we raise awareness of the privacy threat in online review sites through examples and statistics derived from the privacy-check tool.","PeriodicalId":201519,"journal":{"name":"2012 IEEE Symposium on Security and Privacy Workshops","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127014312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}