Research on keystroke-based authentication has traditionally assumed human impostors who generate forgeries by physically typing on the keyboard. With bots now well understood to have the capacity to originate precisely timed keystroke sequences, this model of attack is likely to underestimate the threat facing a keystroke-based system in practice. In this work, we investigate how a keystroke-based authentication system would perform if it were subjected to synthetic attacks designed to mimic the typical user. To implement the attacks, we perform a rigorous statistical analysis on keystroke biometrics data collected over a 2-year period from more than 3000 users, and then use the observed statistical traits to design and launch algorithmic attacks against three state-of-the-art password-based keystroke verification systems. Relative to the zero-effort attacks typically used to test the performance of keystroke biometric systems, we show that our algorithmic attack increases the mean Equal Error Rates (EERs) of three high performance keystroke verifiers by between 28.6% and 84.4%. We also find that the impact of the attack is more pronounced when the keystroke profiles subjected to the attack are based on shorter strings, and that some users see considerably greater performance degradation under the attack than others. This article calls for a shift from the traditional zero-effort approach of testing the performance of password-based keystroke verifiers, to a more rigorous algorithmic approach that captures the threat posed by today’s bots.
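As a rough sketch of what such a synthetic attack can look like, the snippet below forges a keystroke timing vector by sampling per-key hold times and inter-key flight times from population-level Gaussians. The statistics, function names, and the choice of Gaussian distributions are illustrative assumptions; the paper derives its attack parameters from its own statistical analysis of the 3000-user dataset.

```python
import random

# Hypothetical population-level timing statistics (mean, std. dev. in milliseconds);
# an attacker would estimate these from a large keystroke dataset.
POPULATION_STATS = {
    "hold":   (95.0, 25.0),   # key press-to-release duration
    "flight": (180.0, 60.0),  # key release to next key press
}

def synthesize_timing_vector(phrase, stats=POPULATION_STATS):
    """Forge a timing vector for the given string by sampling each hold and
    flight time from population-level Gaussians (truncated at 1 ms)."""
    events = []
    for i, ch in enumerate(phrase):
        events.append(("hold", ch, max(1.0, random.gauss(*stats["hold"]))))
        if i < len(phrase) - 1:
            events.append(("flight", ch, max(1.0, random.gauss(*stats["flight"]))))
    return events

# A bot can replay such a vector with precise timing against a keystroke verifier.
for event in synthesize_timing_vector("secret123"):
    print(event)
```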
{"title":"Examining a Large Keystroke Biometrics Dataset for Statistical-Attack Openings","authors":"Abdul Serwadda, V. Phoha","doi":"10.1145/2516960","DOIUrl":"https://doi.org/10.1145/2516960","url":null,"abstract":"Research on keystroke-based authentication has traditionally assumed human impostors who generate forgeries by physically typing on the keyboard. With bots now well understood to have the capacity to originate precisely timed keystroke sequences, this model of attack is likely to underestimate the threat facing a keystroke-based system in practice. In this work, we investigate how a keystroke-based authentication system would perform if it were subjected to synthetic attacks designed to mimic the typical user. To implement the attacks, we perform a rigorous statistical analysis on keystroke biometrics data collected over a 2-year period from more than 3000 users, and then use the observed statistical traits to design and launch algorithmic attacks against three state-of-the-art password-based keystroke verification systems.\u0000 Relative to the zero-effort attacks typically used to test the performance of keystroke biometric systems, we show that our algorithmic attack increases the mean Equal Error Rates (EERs) of three high performance keystroke verifiers by between 28.6% and 84.4%. We also find that the impact of the attack is more pronounced when the keystroke profiles subjected to the attack are based on shorter strings, and that some users see considerably greater performance degradation under the attack than others. This article calls for a shift from the traditional zero-effort approach of testing the performance of password-based keystroke verifiers, to a more rigorous algorithmic approach that captures the threat posed by today’s bots.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"45 1","pages":"8"},"PeriodicalIF":0.0,"publicationDate":"2013-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79098695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pieter Philippaerts, Yves Younan, Stijn Muylle, F. Piessens, Sven Lachmund, T. Walter
Code Pointer Masking (CPM) is a novel countermeasure against code injection attacks on native code. By enforcing the correct semantics of code pointers, CPM thwarts attacks that modify code pointers to divert the application’s control flow. It does not rely on secret values such as stack canaries and protects against attacks that are not addressed by state-of-the-art countermeasures of similar performance. This article reports on two prototype implementations on very distinct processor architectures, showing that the idea behind CPM is portable. The evaluation also shows that the overhead of using our countermeasure is very small and the security benefits are substantial.
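The masking idea itself can be pictured with plain integer arithmetic. This is only a toy illustration (real CPM is applied by the compiler to native code pointers on the architectures studied); the region layout and mask values below are invented.

```python
# Hypothetical code-region layout: AND/OR masks chosen so that any masked
# pointer lands inside an aligned code region starting at 0x0804_0000.
AND_MASK = 0x0804_FFFC   # clears bits that would let the pointer leave the region
OR_MASK  = 0x0804_0000   # forces the pointer back to the code-region base

def mask_code_pointer(ptr):
    """Before a return address or function pointer is used, AND/OR it with
    fixed masks, so even an attacker-controlled value can only target the
    (aligned) code region."""
    return (ptr & AND_MASK) | OR_MASK

legit = 0x0804_1A2C                      # pointer already inside the region
assert mask_code_pointer(legit) == legit

injected = 0xBFFF_F710                   # e.g., a stack address holding shellcode
print(hex(mask_code_pointer(injected)))  # redirected into the code region instead
```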
{"title":"CPM: Masking Code Pointers to Prevent Code Injection Attacks","authors":"Pieter Philippaerts, Yves Younan, Stijn Muylle, F. Piessens, Sven Lachmund, T. Walter","doi":"10.1145/2487222.2487223","DOIUrl":"https://doi.org/10.1145/2487222.2487223","url":null,"abstract":"Code Pointer Masking (CPM) is a novel countermeasure against code injection attacks on native code. By enforcing the correct semantics of code pointers, CPM thwarts attacks that modify code pointers to divert the application’s control flow. It does not rely on secret values such as stack canaries and protects against attacks that are not addressed by state-of-the-art countermeasures of similar performance. This article reports on two prototype implementations on very distinct processor architectures, showing that the idea behind CPM is portable. The evaluation also shows that the overhead of using our countermeasure is very small and the security benefits are substantial.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"2 1","pages":"1"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73886476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a generalized framework to evaluate the side-channel information leakage of symmetric block ciphers. The leakage mapping methodology enables the systematic and efficient identification and mitigation of problematic information leakages by exhaustively considering relevant leakage models. The evaluation procedure bounds the anticipated resistance of an implementation to the general class of univariate differential side-channel analysis techniques. Typical applications are demonstrated using the well-known Hamming weight and Hamming distance leakage models, with recommendations for the incorporation of more accurate models. The evaluation results are empirically validated against correlation-based differential side-channel analysis attacks on two typical unprotected implementations of the Advanced Encryption Standard.
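As a sketch of the univariate analysis class that the methodology bounds resistance against, the snippet below correlates a Hamming-weight leakage model with (here, simulated) traces for every key-byte guess. The intermediate value is simplified to plaintext XOR key and all numbers are synthetic; a real evaluation would model the AES S-box output and use measured power traces.

```python
import numpy as np

def hamming_weight(x):
    """Number of set bits in each byte: the classic Hamming weight leakage model."""
    return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank_guesses(plaintexts, traces, intermediate):
    """Correlate the modeled leakage of an intermediate value with the traces
    for every key-byte guess; return guesses ranked by |correlation|."""
    scores = []
    for guess in range(256):
        model = hamming_weight(intermediate(plaintexts, guess))
        scores.append((abs(np.corrcoef(model, traces)[0, 1]), guess))
    return sorted(scores, reverse=True)

rng = np.random.default_rng(0)
secret_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
intermediate = lambda p, k: np.bitwise_xor(p, k)   # simplified stand-in target
traces = hamming_weight(intermediate(plaintexts, secret_key)) + rng.normal(0, 1.0, 2000)

best_corr, best_guess = cpa_rank_guesses(plaintexts, traces, intermediate)[0]
print(f"recovered key byte: {best_guess:#04x} (|r| = {best_corr:.2f})")
```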
{"title":"Leakage Mapping: A Systematic Methodology for Assessing the Side-Channel Information Leakage of Cryptographic Implementations","authors":"William E. Cobb, R. Baldwin, Eric D. Laspe","doi":"10.1145/2487222.2487224","DOIUrl":"https://doi.org/10.1145/2487222.2487224","url":null,"abstract":"We propose a generalized framework to evaluate the side-channel information leakage of symmetric block ciphers. The leakage mapping methodology enables the systematic and efficient identification and mitigation of problematic information leakages by exhaustively considering relevant leakage models. The evaluation procedure bounds the anticipated resistance of an implementation to the general class of univariate differential side-channel analysis techniques. Typical applications are demonstrated using the well-known Hamming weight and Hamming distance leakage models, with recommendations for the incorporation of more accurate models. The evaluation results are empirically validated against correlation-based differential side-channel analysis attacks on two typical unprotected implementations of the Advanced Encryption Standard.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"27 1","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2013-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73926056","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time network- and host-based Anomaly Detection Systems (ADSs) transform a continuous stream of input data into meaningful and quantifiable anomaly scores. These scores are subsequently compared to a fixed detection threshold and classified as either benign or malicious. We argue that a real-time ADS’ input changes considerably over time and a fixed threshold value cannot guarantee good anomaly detection accuracy for such a time-varying input. In this article, we propose a simple and generic technique to adaptively tune the detection threshold of any threshold-based ADS. To this end, we first perform statistical and information-theoretic analysis of network- and host-based ADSs’ anomaly scores to reveal a consistent time correlation structure during benign activity periods. We model the observed correlation structure using Markov chains, which are in turn used in a stochastic target tracking framework to adapt an ADS’ detection threshold in accordance with real-time measurements. We also use statistical techniques to make the proposed algorithm resilient to sporadic changes and evasion attacks. In order to evaluate the proposed approach, we incorporate the proposed adaptive thresholding module into multiple ADSs and evaluate those ADSs over comprehensive and independently collected network and host attack datasets. We show that, while reducing the need for manual threshold configuration, the proposed technique provides considerable and consistent accuracy improvements for all evaluated ADSs.
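A minimal sketch of the adaptive-threshold idea, not of the paper's method: it keeps the threshold a fixed number of deviations above the moving statistics of recent benign scores and damps how fast the threshold may move. The paper instead models the time correlation of scores with Markov chains inside a stochastic target tracking framework; the window size, multiplier k, and max_step below are arbitrary assumptions.

```python
from collections import deque
import statistics

class AdaptiveThreshold:
    """Track recently observed benign anomaly scores and keep the detection
    threshold k deviations above their moving statistics, moving it at most
    max_step per update to resist sporadic jumps and slow-poisoning evasion."""

    def __init__(self, window=200, k=3.0, max_step=0.05, start=1.0):
        self.scores = deque(maxlen=window)
        self.k, self.max_step, self.threshold = k, max_step, start

    def update(self, score):
        alarm = score > self.threshold
        if not alarm:
            self.scores.append(score)            # treat sub-threshold scores as benign
        if len(self.scores) >= 20:               # wait for a minimal history
            target = (statistics.fmean(self.scores)
                      + self.k * statistics.pstdev(self.scores))
            step = max(-self.max_step, min(self.max_step, target - self.threshold))
            self.threshold += step               # bounded move toward the target
        return alarm

adt = AdaptiveThreshold()
print([adt.update(s) for s in (0.2, 0.3, 0.25, 0.9, 0.28, 2.5)])
```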
{"title":"Automated Anomaly Detector Adaptation using Adaptive Threshold Tuning","authors":"M. Ali, E. Al-Shaer, Hassan Khan, S. A. Khayam","doi":"10.1145/2445566.2445569","DOIUrl":"https://doi.org/10.1145/2445566.2445569","url":null,"abstract":"Real-time network- and host-based Anomaly Detection Systems (ADSs) transform a continuous stream of input data into meaningful and quantifiable anomaly scores. These scores are subsequently compared to a fixed detection threshold and classified as either benign or malicious. We argue that a real-time ADS’ input changes considerably over time and a fixed threshold value cannot guarantee good anomaly detection accuracy for such a time-varying input. In this article, we propose a simple and generic technique to adaptively tune the detection threshold of any ADS that works on threshold method. To this end, we first perform statistical and information-theoretic analysis of network- and host-based ADSs’ anomaly scores to reveal a consistent time correlation structure during benign activity periods. We model the observed correlation structure using Markov chains, which are in turn used in a stochastic target tracking framework to adapt an ADS’ detection threshold in accordance with real-time measurements. We also use statistical techniques to make the proposed algorithm resilient to sporadic changes and evasion attacks. In order to evaluate the proposed approach, we incorporate the proposed adaptive thresholding module into multiple ADSs and evaluate those ADSs over comprehensive and independently collected network and host attack datasets. We show that, while reducing the need of human threshold configuration, the proposed technique provides considerable and consistent accuracy improvements for all evaluated ADSs.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"50 1","pages":"17:1-17:30"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89654021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We show that fragmented IPv4 and IPv6 traffic is vulnerable to effective interception and denial-of-service (DoS) attacks by an off-path attacker. Specifically, we demonstrate a weak attacker intercepting more than 80% of the data between peers and causing over 94% loss rate. We show that our attacks are practical through experimental validation on popular industrial and open-source products, with realistic network setups that involve NAT or tunneling and include concurrent legitimate traffic as well as packet losses. The interception attack requires a zombie agent behind the same NAT or tunnel-gateway as the victim destination; the DoS attack only requires a puppet agent, that is, a sandboxed applet or script running in web-browser context. The complexity of our attacks depends on the predictability of the IP Identification (ID) field, which is typically implemented as one or multiple counters, as allowed and recommended by the IP specifications. The attacks are much simpler and more efficient against implementations, such as Windows, which use one ID counter for all destinations. Therefore, much of our focus is on presenting effective attacks against implementations, such as Linux, which use per-destination ID counters. We present practical defenses for the attacks presented in this article; the defenses can be deployed on network firewalls without changes to hosts or the operating system kernel.
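The role of the ID field can be pictured with two toy counter models. Hostnames, the starting value, and the simplified behavior are made up; real stacks add details (hash-bucketed counters, randomization) that this sketch ignores.

```python
import itertools

class GlobalCounterHost:
    """One IP ID counter for all destinations (as in the Windows behavior the
    article discusses): any peer that receives a packet learns the counter and
    can predict the IDs of packets sent to other destinations."""
    def __init__(self, start=0):
        self._ctr = itertools.count(start)
    def next_id(self, dst):
        return next(self._ctr) & 0xFFFF

class PerDestCounterHost:
    """Per-destination ID counters (as in Linux): probing one's own address no
    longer reveals the counter used toward the victim, which is why attacks on
    such stacks require additional steps."""
    def __init__(self):
        self._ctrs = {}
    def next_id(self, dst):
        return next(self._ctrs.setdefault(dst, itertools.count(0))) & 0xFFFF

host = GlobalCounterHost(start=7000)
observed = host.next_id("attacker.example")    # zombie/puppet observes 7000
predicted = host.next_id("victim.example")     # attacker predicts 7001 for the victim
print(observed, predicted)
```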
{"title":"Fragmentation Considered Vulnerable","authors":"Y. Gilad, A. Herzberg","doi":"10.1145/2445566.2445568","DOIUrl":"https://doi.org/10.1145/2445566.2445568","url":null,"abstract":"We show that fragmented IPv4 and IPv6 traffic is vulnerable to effective interception and denial-of-service (DoS) attacks by an off-path attacker. Specifically, we demonstrate a weak attacker intercepting more than 80% of the data between peers and causing over 94% loss rate.\u0000 We show that our attacks are practical through experimental validation on popular industrial and open-source products, with realistic network setups that involve NAT or tunneling and include concurrent legitimate traffic as well as packet losses. The interception attack requires a zombie agent behind the same NAT or tunnel-gateway as the victim destination; the DoS attack only requires a puppet agent, that is, a sandboxed applet or script running in web-browser context.\u0000 The complexity of our attacks depends on the predictability of the IP Identification (ID) field which is typically implemented as one or multiple counters, as allowed and recommended by the IP specifications. The attacks are much simpler and more efficient for implementations, such as Windows, which use one ID counter for all destinations. Therefore, much of our focus is on presenting effective attacks for implementations, such as Linux, which use per-destination ID counters.\u0000 We present practical defenses for the attacks presented in this article, the defenses can be deployed on network firewalls without changes to hosts or operating system kernel.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"2018 1","pages":"16:1-16:31"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73669292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
K. Jayaraman, Mahesh V. Tripunitara, Vijay Ganesh, M. Rinard, S. Chapin
Verifying that access-control systems maintain desired security properties is recognized as an important problem in security. Enterprise access-control systems have grown to protect tens of thousands of resources, and there is a need for verification to scale commensurately. We present techniques for abstraction-refinement and bound-estimation for bounded model checkers to automatically find errors in Administrative Role-Based Access Control (ARBAC) security policies. ARBAC is the first and most comprehensive administrative scheme for Role-Based Access Control (RBAC) systems. In the abstraction-refinement portion of our approach, we identify and discard roles that are unlikely to be relevant to the verification question (the abstraction step). We then restore such abstracted roles incrementally (the refinement steps). In the bound-estimation portion of our approach, we lower the estimate of the diameter of the reachability graph from the worst-case by recognizing relationships between roles and state-change rules. Our techniques complement one another, and are used with conventional bounded model checking. Our approach is sound and complete: an error is found if and only if it exists. We have implemented our technique in an access-control policy analysis tool called Mohawk. We show empirically that Mohawk scales well to realistic policies, and provide a comparison with prior tools.
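A schematic of an abstraction-refinement loop in this spirit is sketched below; `model_check`, `relevance`, and the toy `Policy` are hypothetical stand-ins for the bounded model checker, the relevance heuristic, and a real ARBAC policy.

```python
def verify_with_refinement(policy, query, model_check, relevance):
    """Check a small abstraction containing only the roles ranked most relevant
    to the query, then restore discarded roles in batches until an error is
    found or the full policy has been checked. Restricting roles removes
    behavior, so an error found in the abstraction also exists in the full policy."""
    ranked = sorted(policy.roles, key=lambda r: relevance(r, query), reverse=True)
    cut = max(1, len(ranked) // 4)
    kept, remaining = ranked[:cut], ranked[cut:]
    while True:
        counterexample = model_check(policy.restrict_to(kept), query)
        if counterexample is not None:
            return counterexample                      # genuine error found
        if not remaining:
            return None                                # full policy checked, no error
        kept, remaining = kept + remaining[:len(kept)], remaining[len(kept):]  # refine

class Policy:                                          # toy stand-in, not ARBAC semantics
    def __init__(self, roles): self.roles = list(roles)
    def restrict_to(self, roles): return Policy(roles)

toy = Policy(["Admin", "HR", "Eng", "Intern", "Audit", "Ops", "Legal", "IT"])
model_check = lambda p, q: "trace" if q in p.roles else None   # dummy checker
relevance = lambda r, q: 1.0 if r == q else 0.0                # dummy heuristic
print(verify_with_refinement(toy, "Audit", model_check, relevance))
```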
{"title":"Mohawk: Abstraction-Refinement and Bound-Estimation for Verifying Access Control Policies","authors":"K. Jayaraman, Mahesh V. Tripunitara, Vijay Ganesh, M. Rinard, S. Chapin","doi":"10.1145/2445566.2445570","DOIUrl":"https://doi.org/10.1145/2445566.2445570","url":null,"abstract":"Verifying that access-control systems maintain desired security properties is recognized as an important problem in security. Enterprise access-control systems have grown to protect tens of thousands of resources, and there is a need for verification to scale commensurately. We present techniques for abstraction-refinement and bound-estimation for bounded model checkers to automatically find errors in Administrative Role-Based Access Control (ARBAC) security policies. ARBAC is the first and most comprehensive administrative scheme for Role-Based Access Control (RBAC) systems. In the abstraction-refinement portion of our approach, we identify and discard roles that are unlikely to be relevant to the verification question (the abstraction step). We then restore such abstracted roles incrementally (the refinement steps). In the bound-estimation portion of our approach, we lower the estimate of the diameter of the reachability graph from the worst-case by recognizing relationships between roles and state-change rules. Our techniques complement one another, and are used with conventional bounded model checking. Our approach is sound and complete: an error is found if and only if it exists. We have implemented our technique in an access-control policy analysis tool called Mohawk. We show empirically that Mohawk scales well to realistic policies, and provide a comparison with prior tools.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"19 1","pages":"18:1-18:28"},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82150060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
(U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score actually matches the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology allows us to quantify the risk reduction achievable by acting on the risk factor. We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performance of the current risk factor used in industry, the CVSS base score; and (2) determine whether it can be improved by considering additional factors such as the existence of a proof-of-concept exploit, or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of a proof-of-concept exploit is a significantly better risk factor; and (c) fixing in response to exploit presence in the black markets yields the largest risk reduction.
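The core case-control computation can be shown with a 2x2 odds ratio comparing how often a risk factor appears among cases (vulnerabilities exploited in the wild) versus controls; the counts below are invented solely to exercise the formula.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds ratio of a case-control 2x2 table: how much more common the risk
    factor (e.g., 'CVSS base score >= 9' or 'a proof-of-concept exploit exists')
    is among cases than among controls; values near 1 mean the factor carries
    little information about exploitation in the wild."""
    return (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)

# Made-up counts purely for illustration; the article derives real counts from
# public vulnerability, exploit, and exploit-kit data.
cases    = {"exposed": 120, "unexposed": 80}    # exploited in the wild
controls = {"exposed": 300, "unexposed": 500}   # not known to be exploited

print(odds_ratio(cases["exposed"], controls["exposed"],
                 cases["unexposed"], controls["unexposed"]))   # 2.5
```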
{"title":"Comparing Vulnerability Severity and Exploits Using Case-Control Studies","authors":"Luca Allodi, F. Massacci","doi":"10.1145/2630069","DOIUrl":"https://doi.org/10.1145/2630069","url":null,"abstract":"(U.S.) Rule-based policies for mitigating software risk suggest using the CVSS score to measure the risk of an individual vulnerability and act accordingly. A key issue is whether the ‘danger’ score does actually match the risk of exploitation in the wild, and if and how such a score could be improved. To address this question, we propose using a case-control study methodology similar to the procedure used to link lung cancer and smoking in the 1950s. A case-control study allows the researcher to draw conclusions on the relation between some risk factor (e.g., smoking) and an effect (e.g., cancer) by looking backward at the cases (e.g., patients) and comparing them with controls (e.g., randomly selected patients with similar characteristics). The methodology allows us to quantify the risk reduction achievable by acting on the risk factor. We illustrate the methodology by using publicly available data on vulnerabilities, exploits, and exploits in the wild to (1) evaluate the performances of the current risk factor in the industry, the CVSS base score; (2) determine whether it can be improved by considering additional factors such the existence of a proof-of-concept exploit, or of an exploit in the black markets. Our analysis reveals that (a) fixing a vulnerability just because it was assigned a high CVSS score is equivalent to randomly picking vulnerabilities to fix; (b) the existence of proof-of-concept exploits is a significantly better risk factor; (c) fixing in response to exploit presence in black markets yields the largest risk reduction.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"46 1","pages":"1:1-1:20"},"PeriodicalIF":0.0,"publicationDate":"2013-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85142576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Role mining tackles the problem of finding a role-based access control (RBAC) configuration, given an access-control matrix assigning users to access permissions as input. Most role-mining approaches work by constructing a large set of candidate roles and use a greedy selection strategy to iteratively pick a small subset such that the differences between the resulting RBAC configuration and the access control matrix are minimized. In this article, we advocate an alternative approach that recasts role mining as an inference problem rather than a lossy compression problem. Instead of using combinatorial algorithms to minimize the number of roles needed to represent the access-control matrix, we derive probabilistic models to learn the RBAC configuration that most likely underlies the given matrix. Our models are generative in that they reflect the way that permissions are assigned to users in a given RBAC configuration. We additionally model how user-permission assignments that conflict with an RBAC configuration emerge and we investigate the influence of constraints on role hierarchies and on the number of assignments. In experiments with access-control matrices from real-world enterprises, we compare our proposed models with other role-mining methods. Our results show that our probabilistic models infer roles that generalize well to new system users for a wide variety of data, while other models’ generalization abilities depend on the dataset given.
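One way to picture the generative view is to run it forward: sample role-permission and user-role assignments, combine them with a Boolean OR, and flip a few bits to model assignments that conflict with the underlying configuration. The sizes, densities, and noise rate below are arbitrary, and the sketch only generates a matrix; the paper's contribution is inferring the most likely configuration from an observed one.

```python
import numpy as np

def sample_user_permission_matrix(num_users=50, num_perms=40, num_roles=5,
                                  noise=0.02, seed=1):
    """Generative RBAC view: users hold roles, roles grant permissions, and the
    observed user-permission assignment is their Boolean composition plus a
    small rate of conflicting (flipped) entries."""
    rng = np.random.default_rng(seed)
    role_perm = rng.random((num_roles, num_perms)) < 0.25   # which role grants what
    user_role = rng.random((num_users, num_roles)) < 0.30   # which user holds what
    clean = (user_role.astype(int) @ role_perm.astype(int)) > 0   # Boolean OR
    flips = rng.random(clean.shape) < noise                 # conflicting assignments
    return np.logical_xor(clean, flips), user_role, role_perm

upa, user_role, role_perm = sample_user_permission_matrix()
print(upa.shape, round(float(upa.mean()), 3))   # observed matrix and its density
```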
{"title":"Role Mining with Probabilistic Models","authors":"Mario Frank, J. Buhmann, D. Basin","doi":"10.1145/2445566.2445567","DOIUrl":"https://doi.org/10.1145/2445566.2445567","url":null,"abstract":"Role mining tackles the problem of finding a role-based access control (RBAC) configuration, given an access-control matrix assigning users to access permissions as input. Most role-mining approaches work by constructing a large set of candidate roles and use a greedy selection strategy to iteratively pick a small subset such that the differences between the resulting RBAC configuration and the access control matrix are minimized. In this article, we advocate an alternative approach that recasts role mining as an inference problem rather than a lossy compression problem. Instead of using combinatorial algorithms to minimize the number of roles needed to represent the access-control matrix, we derive probabilistic models to learn the RBAC configuration that most likely underlies the given matrix.\u0000 Our models are generative in that they reflect the way that permissions are assigned to users in a given RBAC configuration. We additionally model how user-permission assignments that conflict with an RBAC configuration emerge and we investigate the influence of constraints on role hierarchies and on the number of assignments. In experiments with access-control matrices from real-world enterprises, we compare our proposed models with other role-mining methods. Our results show that our probabilistic models infer roles that generalize well to new system users for a wide variety of data, while other models’ generalization abilities depend on the dataset given.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"93 1","pages":"15:1-15:28"},"PeriodicalIF":0.0,"publicationDate":"2012-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83847974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages, covering obfuscation, where a subject attempts to hide her identity; imitation, where a subject attempts to frame another subject by imitating his writing style; and translation, where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing, and the imitation attempts succeed up to 67% of the time, depending on the stylometry technique used. These results are all the more significant given that the experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts, with a total of 57 unique authors, to promote research in this field. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.
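A toy look at the kind of features such recognizers rely on, and therefore at what an obfuscating author must shift: average word length and function-word frequencies. Real feature sets (for example, writeprints-style sets) are far richer, and the two sentences below are invented.

```python
import re
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "was", "it"]

def style_features(text):
    """A few classic stylometric features: average word length plus relative
    frequencies of common function words, which authors use habitually and
    largely unconsciously."""
    words = re.findall(r"[a-z']+", text.lower())
    total = max(1, len(words))
    counts = Counter(words)
    return [sum(map(len, words)) / total] + [counts[w] / total for w in FUNCTION_WORDS]

original   = "The results of the study show that the method is robust and it was effective."
obfuscated = "Findings indicate robustness; effectiveness likewise appears demonstrable."
print(style_features(original))
print(style_features(obfuscated))
```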
{"title":"Adversarial stylometry: Circumventing authorship recognition to preserve privacy and anonymity","authors":"Michael Brennan, Sadia Afroz, R. Greenstadt","doi":"10.1145/2382448.2382450","DOIUrl":"https://doi.org/10.1145/2382448.2382450","url":null,"abstract":"The use of stylometry, authorship recognition through purely linguistic means, has contributed to literary, historical, and criminal investigation breakthroughs. Existing stylometry research assumes that authors have not attempted to disguise their linguistic writing style. We challenge this basic assumption of existing stylometry methodologies and present a new area of research: adversarial stylometry. Adversaries have a devastating effect on the robustness of existing classification methods. Our work presents a framework for creating adversarial passages including obfuscation, where a subject attempts to hide her identity, and imitation, where a subject attempts to frame another subject by imitating his writing style, and translation where original passages are obfuscated with machine translation services. This research demonstrates that manual circumvention methods work very well while automated translation methods are not effective. The obfuscation method reduces the techniques' effectiveness to the level of random guessing and the imitation attempts succeed up to 67% of the time depending on the stylometry technique used. These results are more significant given the fact that experimental subjects were unfamiliar with stylometry, were not professional writers, and spent little time on the attacks. This article also contributes to the field by using human subjects to empirically validate the claim of high accuracy for four current techniques (without adversaries). We have also compiled and released two corpora of adversarial stylometry texts to promote research in this field with a total of 57 unique authors. We argue that this field is important to a multidisciplinary approach to privacy, security, and anonymity.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"31 1","pages":"12:1-12:22"},"PeriodicalIF":0.0,"publicationDate":"2012-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75012298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Audit logs are an integral part of modern computer systems due to their forensic value. Protecting audit logs on a physically unprotected machine in hostile environments is a challenging task, especially in the presence of active adversaries. It is critical for such a system to have forward security and append-only properties such that when an adversary compromises a logging machine, she cannot forge or selectively delete the log entries accumulated before the compromise. Existing public-key-based secure logging schemes are computationally costly. Existing symmetric secure logging schemes are not publicly verifiable and open to certain attacks. In this article, we develop a new forward-secure and aggregate signature scheme called Blind-Aggregate-Forward (BAF), which is suitable for secure logging in resource-constrained systems. BAF is the only cryptographic secure logging scheme that can produce publicly verifiable, forward-secure and aggregate signatures with low computation, key/signature storage, and signature communication overheads for the loggers, without requiring any online trusted third party support. A simple variant of BAF also allows a fine-grained verification of log entries without compromising the security or computational efficiency of BAF. We prove that our schemes are secure in Random Oracle Model (ROM). We also show that they are significantly more efficient than all the previous publicly verifiable cryptographic secure logging schemes.
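The forward-security ingredient can be sketched with hash-based key evolution and a running aggregate over per-entry tags. This mirrors only the forward-secure, append-only flavor of the scheme: BAF aggregates publicly verifiable signatures, whereas the sketch below uses symmetric HMACs and an invented initial key.

```python
import hashlib
import hmac

def evolve(key):
    """One-way key update: once the old key is erased, a later compromise of the
    logger does not allow forging or selectively deleting earlier entries."""
    return hashlib.sha256(b"evolve" + key).digest()

def append_log(entries, key):
    """Tag each entry with the current key, fold the tag into a running
    aggregate, then evolve and discard the key (forward-secure, append-only)."""
    aggregate = b"\x00" * 32
    tags = []
    for entry in entries:
        tag = hmac.new(key, entry, hashlib.sha256).digest()
        aggregate = hashlib.sha256(aggregate + tag).digest()
        tags.append(tag)
        key = evolve(key)                       # previous key is never reused
    return tags, aggregate

tags, aggregate = append_log([b"login alice", b"config changed", b"logout alice"],
                             key=b"initial-secret-from-trusted-setup")
print(aggregate.hex())
```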
{"title":"BAF and FI-BAF: Efficient and Publicly Verifiable Cryptographic Schemes for Secure Logging in Resource-Constrained Systems","authors":"A. Yavuz, P. Ning, M. Reiter","doi":"10.1145/2240276.2240280","DOIUrl":"https://doi.org/10.1145/2240276.2240280","url":null,"abstract":"Audit logs are an integral part of modern computer systems due to their forensic value. Protecting audit logs on a physically unprotected machine in hostile environments is a challenging task, especially in the presence of active adversaries. It is critical for such a system to have forward security and append-only properties such that when an adversary compromises a logging machine, she cannot forge or selectively delete the log entries accumulated before the compromise. Existing public-key-based secure logging schemes are computationally costly. Existing symmetric secure logging schemes are not publicly verifiable and open to certain attacks.\u0000 In this article, we develop a new forward-secure and aggregate signature scheme called Blind-Aggregate-Forward (BAF), which is suitable for secure logging in resource-constrained systems. BAF is the only cryptographic secure logging scheme that can produce publicly verifiable, forward-secure and aggregate signatures with low computation, key/signature storage, and signature communication overheads for the loggers, without requiring any online trusted third party support. A simple variant of BAF also allows a fine-grained verification of log entries without compromising the security or computational efficiency of BAF. We prove that our schemes are secure in Random Oracle Model (ROM). We also show that they are significantly more efficient than all the previous publicly verifiable cryptographic secure logging schemes.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"78 1","pages":"9:1-9:28"},"PeriodicalIF":0.0,"publicationDate":"2012-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84669651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}