Biometric authentication verifies users based on their inherent, unique characteristics—who you are. In addition to physiological biometrics, behavioral biometrics have proven very useful in authenticating users. Mouse dynamics, the distinctive patterns in a person's mouse movements, are one such behavioral biometric. In this article, we present a user verification system using mouse dynamics that is transparent to users and can naturally be applied for continuous reauthentication. The key feature of our system lies in using much finer-grained (point-by-point) angle-based metrics of mouse movements for user verification. These new metrics vary considerably from person to person and are independent of the computing platform. Moreover, we utilize support vector machines (SVMs) for quick and accurate classification. Our technique is robust across different operating platforms, and no specialized hardware is required. The efficacy of our approach is validated through a series of experiments based on three sets of mouse movement data collected in controlled environments and in the field. Our experimental results show that the proposed system can verify a user accurately and in a timely manner, with minimal induced system overhead.
{"title":"An Efficient User Verification System Using Angle-Based Mouse Movement Biometrics","authors":"Nan Zheng, Aaron Paloski, Haining Wang","doi":"10.1145/2893185","DOIUrl":"https://doi.org/10.1145/2893185","url":null,"abstract":"Biometric authentication verifies a user based on its inherent, unique characteristics—who you are. In addition to physiological biometrics, behavioral biometrics has proven very useful in authenticating a user. Mouse dynamics, with their unique patterns of mouse movements, is one such behavioral biometric. In this article, we present a user verification system using mouse dynamics, which is transparent to users and can be naturally applied for continuous reauthentication. The key feature of our system lies in using much more fine-grained (point-by-point) angle-based metrics of mouse movements for user verification. These new metrics are relatively unique from person to person and independent of a computing platform. Moreover, we utilize support vector machines (SVMs) for quick and accurate classification. Our technique is robust across different operating platforms, and no specialized hardware is required. The efficacy of our approach is validated through a series of experiments, which are based on three sets of user mouse movement data collected in controllable environments and in the field. Our experimental results show that the proposed system can verify a user in an accurate and timely manner, with minor induced system overhead.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"57 1","pages":"11:1-11:27"},"PeriodicalIF":0.0,"publicationDate":"2016-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74389496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aggregator-oblivious encryption is a useful notion put forward by Shi et al. in 2011 that allows an untrusted aggregator to periodically compute an aggregate value over encrypted data contributed by a set of users. Such encryption schemes find numerous applications, particularly in the context of privacy-preserving smart metering. This article presents a general framework for constructing privacy-preserving aggregator-oblivious encryption schemes using a variant of Cramer-Shoup’s paradigm of smooth projective hashing. This abstraction leads to new schemes based on a variety of complexity assumptions. It also improves upon existing constructions, providing schemes with shorter ciphertexts and better encryption times.
{"title":"A New Framework for Privacy-Preserving Aggregation of Time-Series Data","authors":"Fabrice Benhamouda, M. Joye, Benoît Libert","doi":"10.1145/2873069","DOIUrl":"https://doi.org/10.1145/2873069","url":null,"abstract":"Aggregator-oblivious encryption is a useful notion put forward by Shi et al. in 2011 that allows an untrusted aggregator to periodically compute an aggregate value over encrypted data contributed by a set of users. Such encryption schemes find numerous applications, particularly in the context of privacy-preserving smart metering.\u0000 This article presents a general framework for constructing privacy-preserving aggregator-oblivious encryption schemes using a variant of Cramer-Shoup’s paradigm of smooth projective hashing. This abstraction leads to new schemes based on a variety of complexity assumptions. It also improves upon existing constructions, providing schemes with shorter ciphertexts and better encryption times.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"49 1","pages":"10:1-10:21"},"PeriodicalIF":0.0,"publicationDate":"2016-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78244837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Shabtai, Maya Bercovitch, L. Rokach, Y. Gal, Y. Elovici, E. Shmueli
Active honeytokens are fake digital data objects planted among real data objects and used in an attempt to detect data misuse by insiders. In this article, we are interested in understanding how users (e.g., employees) behave when interacting with honeytokens, specifically addressing the following questions: Can users distinguish genuine data objects from honeytokens? And how do users' behavior and tendency to misuse data change when they are aware of the use of honeytokens? First, we present an automated and generic method for generating the honeytokens used in the subsequent behavioral studies. The results of the first study indicate that it is possible to automatically generate honeytokens that are difficult for users to distinguish from real tokens. The results of the second study unexpectedly show that users did not behave differently when informed in advance that honeytokens were planted in the database and would be monitored to detect illegitimate behavior. These results can inform security system designers about the environmental variables that affect people's data misuse behavior and about how to generate honeytokens that users cannot readily identify as fake.
{"title":"Behavioral Study of Users When Interacting with Active Honeytokens","authors":"A. Shabtai, Maya Bercovitch, L. Rokach, Y. Gal, Y. Elovici, E. Shmueli","doi":"10.1145/2854152","DOIUrl":"https://doi.org/10.1145/2854152","url":null,"abstract":"Active honeytokens are fake digital data objects planted among real data objects and used in an attempt to detect data misuse by insiders. In this article, we are interested in understanding how users (e.g., employees) behave when interacting with honeytokens, specifically addressing the following questions: Can users distinguish genuine data objects from honeytokens? And, how does the user's behavior and tendency to misuse data change when he or she is aware of the use of honeytokens? First, we present an automated and generic method for generating the honeytokens that are used in the subsequent behavioral studies. The results of the first study indicate that it is possible to automatically generate honeytokens that are difficult for users to distinguish from real tokens. The results of the second study unexpectedly show that users did not behave differently when informed in advance that honeytokens were planted in the database and that these honeytokens would be monitored to detect illegitimate behavior. These results can inform security system designers about the type of environmental variables that affect people's data misuse behavior and how to generate honeytokens that evade detection.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"50 1","pages":"9:1-9:21"},"PeriodicalIF":0.0,"publicationDate":"2016-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85220305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Alexander, Lee Pike, Peter Loscocco, George Coker
This work examines the use of model checking techniques to verify system-level security properties of a collection of interacting virtual machines. Specifically, we examine how local access control policies implemented in individual virtual machines and a hypervisor can be shown to satisfy global access control constraints. The SAL model checker is used to model and verify a collection of stateful domains with protected resources and local MAC policies attempting to access needed resources from other domains. The model is described along with its verification conditions. The need to control state-space explosion is motivated, and techniques for writing theorems and limiting domains are explored. Finally, analysis results are examined along with analysis complexity.
{"title":"Model Checking Distributed Mandatory Access Control Policies","authors":"P. Alexander, Lee Pike, Peter Loscocco, George Coker","doi":"10.1145/2785966","DOIUrl":"https://doi.org/10.1145/2785966","url":null,"abstract":"This work examines the use of model checking techniques to verify system-level security properties of a collection of interacting virtual machines. Specifically, we examine how local access control policies implemented in individual virtual machines and a hypervisor can be shown to satisfy global access control constraints. The SAL model checker is used to model and verify a collection of stateful domains with protected resources and local MAC policies attempting to access needed resources from other domains. The model is described along with verification conditions. The need to control state-space explosion is motivated and techniques for writing theorems and limiting domains explored. Finally, analysis results are examined along with analysis complexity.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"310 1","pages":"6:1-6:25"},"PeriodicalIF":0.0,"publicationDate":"2015-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78265063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recent years have witnessed numerous smart grid deployment initiatives. Smart grids provide bidirectional communication between meters and head-end systems through Advanced Metering Infrastructure (AMI). Recent studies highlight the threats targeting AMI. Despite the need for tailored Intrusion Detection Systems (IDSs) for smart grids, very limited progress has been made in this area. Unlike traditional networks, smart grids have their own unique challenges, such as devices with limited computational power and potentially high deployment costs, which restrict the deployment options of intrusion detectors. We show that smart grids exhibit deterministic and predictable behavior that can be accurately modeled to detect intrusions. However, this determinism can also be leveraged by attackers to launch evasion attacks. To this end, in this article, we present a robust mutation-based intrusion detection system that makes the behavior unpredictable for the attacker while keeping it deterministic for the system. We model the AMI behavior using event logs collected at smart collectors, which in turn can be verified against invariant specifications generated from the AMI behavior and a mutable configuration. Event logs are modeled using a fourth-order Markov chain, and specifications are written in Linear Temporal Logic (LTL). To counter evasion and mimicry attacks, we propose a configuration randomization module. The approach provides robustness against evasion and mimicry attacks; however, we discuss how it can still be evaded to a certain extent. We validate our approach on a real-world dataset covering thousands of meters, collected at the AMI of a leading utility provider.
{"title":"Randomization-Based Intrusion Detection System for Advanced Metering Infrastructure*","authors":"M. Ali, E. Al-Shaer","doi":"10.1145/2814936","DOIUrl":"https://doi.org/10.1145/2814936","url":null,"abstract":"Smart grid deployment initiatives have been witnessed in recent years. Smart grids provide bidirectional communication between meters and head-end systems through Advanced Metering Infrastructure (AMI). Recent studies highlight the threats targeting AMI. Despite the need for tailored Intrusion Detection Systems (IDSs) for smart grids, very limited progress has been made in this area. Unlike traditional networks, smart grids have their own unique challenges, such as limited computational power devices and potentially high deployment cost, that restrict the deployment options of intrusion detectors. We show that smart grids exhibit deterministic and predictable behavior that can be accurately modeled to detect intrusion. However, it can also be leveraged by the attackers to launch evasion attacks. To this end, in this article, we present a robust mutation-based intrusion detection system that makes the behavior unpredictable for the attacker while keeping it deterministic for the system. We model the AMI behavior using event logs collected at smart collectors, which in turn can be verified using the invariant specifications generated from the AMI behavior and mutable configuration. Event logs are modeled using fourth-order Markov chain and specifications are written in Linear Temporal Logic (LTL). To counter evasion and mimicry attacks, we propose a configuration randomization module. The approach provides robustness against evasion and mimicry attacks; however, we discuss that it still can be evaded to a certain extent. We validate our approach on a real-world dataset of thousands of meters collected at the AMI of a leading utility provider.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"47 1","pages":"7:1-7:30"},"PeriodicalIF":0.0,"publicationDate":"2015-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79113015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rui Tan, V. Krishna, David K. Y. Yau, Z. Kalbarczyk
Modern information and communication technologies used by electric power grids are subject to cyber-security threats. This article studies the impact of integrity attacks on real-time pricing (RTP), an emerging feature of advanced power grids that can improve system efficiency. Recent studies have shown that RTP creates a closed loop formed by the mutually dependent real-time price signals and price-taking demand. Such a closed loop can be exploited by an adversary whose objective is to destabilize the pricing system. Specifically, small malicious modifications to the price signals can be iteratively amplified by the closed loop, causing highly volatile prices, fluctuating power demand, and increased system operating cost. This article adopts a control-theoretic approach to deriving the fundamental conditions of RTP stability under basic demand, supply, and RTP models that characterize the essential behaviors of consumers, suppliers, and system operators, as well as two broad classes of integrity attacks, namely, the scaling and delay attacks. We show that, under an approximated linear time-invariant formulation, the RTP system is at risk of being destabilized only if the adversary can compromise the price signals advertised to consumers, by either reducing their values in the scaling attack or providing old prices to over half of all consumers in the delay attack. The results provide useful guidelines for system operators to analyze the impact of various attack parameters on system stability so that they may take adequate measures to secure RTP systems.
{"title":"Integrity Attacks on Real-Time Pricing in Electric Power Grids","authors":"Rui Tan, V. Krishna, David K. Y. Yau, Z. Kalbarczyk","doi":"10.1145/2790298","DOIUrl":"https://doi.org/10.1145/2790298","url":null,"abstract":"Modern information and communication technologies used by electric power grids are subject to cyber-security threats. This article studies the impact of integrity attacks on real-time pricing (RTP), an emerging feature of advanced power grids that can improve system efficiency. Recent studies have shown that RTP creates a closed loop formed by the mutually dependent real-time price signals and price-taking demand. Such a closed loop can be exploited by an adversary whose objective is to destabilize the pricing system. Specifically, small malicious modifications to the price signals can be iteratively amplified by the closed loop, causing highly volatile prices, fluctuating power demand, and increased system operating cost. This article adopts a control-theoretic approach to deriving the fundamental conditions of RTP stability under basic demand, supply, and RTP models that characterize the essential behaviors of consumers, suppliers, and system operators, as well as two broad classes of integrity attacks, namely, the scaling and delay attacks. We show that, under an approximated linear time-invariant formulation, the RTP system is at risk of being destabilized only if the adversary can compromise the price signals advertised to consumers, by either reducing their values in the scaling attack or providing old prices to over half of all consumers in the delay attack. The results provide useful guidelines for system operators to analyze the impact of various attack parameters on system stability so that they may take adequate measures to secure RTP systems.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"68 1","pages":"5:1-5:33"},"PeriodicalIF":0.0,"publicationDate":"2015-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81872422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Passwords are ubiquitous in our daily digital lives. They protect various types of assets ranging from a simple account on an online newspaper website to our health information on government websites. However, due to the inherent value they protect, attackers have developed sophisticated techniques for cracking and guessing passwords both offline and online. In many cases, users are forced to choose stronger passwords to comply with password policies; such policies are known to alienate users and do not significantly improve password quality. Another solution is to put in place proactive password-strength meters/checkers to give feedback to users while they create new passwords. Millions of users are now exposed to these meters on highly popular web services that use user-chosen passwords for authentication. More recently, these meters have also been built into popular password managers, which protect several user secrets including passwords. Recent studies have found evidence that some meters actually guide users to choose better passwords—which is a rare bit of good news in password research. However, these meters are mostly based on ad hoc design. Indeed, as we found, most vendors provide no explanation for their design choices, sometimes making their meters appear to be black boxes. We analyze password meters deployed in selected popular websites and password managers. We document obfuscated source-available meters, infer the algorithms behind the closed-source ones, and measure the strength labels assigned to common passwords from several password dictionaries. From this empirical analysis with millions of passwords, we shed light on how the server end of some web service meters functions and provide examples of highly inconsistent strength outcomes for the same password in different meters, along with examples of many weak passwords being labeled as strong or even excellent. These weaknesses and inconsistencies may confuse users when choosing a stronger password and thus may undermine the purpose of these meters. On the other hand, we believe these findings may help improve existing meters and possibly make them an effective tool in the long run.
{"title":"A Large-Scale Evaluation of High-Impact Password Strength Meters","authors":"Xavier de Carné de Carnavalet, Mohammad Mannan","doi":"10.1145/2739044","DOIUrl":"https://doi.org/10.1145/2739044","url":null,"abstract":"Passwords are ubiquitous in our daily digital lives. They protect various types of assets ranging from a simple account on an online newspaper website to our health information on government websites. However, due to the inherent value they protect, attackers have developed insights into cracking/guessing passwords both offline and online. In many cases, users are forced to choose stronger passwords to comply with password policies; such policies are known to alienate users and do not significantly improve password quality. Another solution is to put in place proactive password-strength meters/checkers to give feedback to users while they create new passwords. Millions of users are now exposed to these meters on highly popular web services that use user-chosen passwords for authentication. More recently, these meters are also being built into popular password managers, which protect several user secrets including passwords. Recent studies have found evidence that some meters actually guide users to choose better passwords—which is a rare bit of good news in password research. However, these meters are mostly based on ad hoc design. At least, as we found, most vendors do not provide any explanation for their design choices, sometimes making them appear as a black box. We analyze password meters deployed in selected popular websites and password managers. We document obfuscated source-available meters, infer the algorithm behind the closed-source ones, and measure the strength labels assigned to common passwords from several password dictionaries. From this empirical analysis with millions of passwords, we shed light on how the server end of some web service meters functions and provide examples of highly inconsistent strength outcomes for the same password in different meters, along with examples of many weak passwords being labeled as strong or even excellent. These weaknesses and inconsistencies may confuse users in choosing a stronger password, and thus may weaken the purpose of these meters. On the other hand, we believe these findings may help improve existing meters and possibly make them an effective tool in the long run.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"98 1","pages":"1:1-1:32"},"PeriodicalIF":0.0,"publicationDate":"2015-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76503849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ghassan O. Karame, Elli Androulaki, Marc Roeschlin, Arthur Gervais, Srdjan Capkun
Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to resist double-spending through a distributed timestamping service. To ensure the operation and security of Bitcoin, it is essential that all transactions and their order of execution are available to all Bitcoin users. Unavoidably, in such a setting, the security of transactions comes at odds with transaction privacy. Motivated by the fact that transaction confirmation in Bitcoin requires tens of minutes, we analyze the conditions for performing successful double-spending attacks against fast payments in Bitcoin, where the time between the exchange of currency and goods is short (on the order of a minute). We show that unless new detection techniques are integrated into the Bitcoin implementation, double-spending attacks on fast payments succeed with considerable probability and can be mounted at low cost. We propose a new and lightweight countermeasure that enables the detection of double-spending attacks in fast transactions. In light of such misbehavior, accountability becomes crucial. We show that in the specific case of Bitcoin, accountability complements privacy. To illustrate this tension, we provide accountability and privacy definitions for Bitcoin, and we investigate analytically and empirically the privacy and accountability provisions in Bitcoin.
{"title":"Misbehavior in Bitcoin: A Study of Double-Spending and Accountability","authors":"Ghassan O. Karame, Elli Androulaki, Marc Roeschlin, Arthur Gervais, Srdjan Capkun","doi":"10.1145/2732196","DOIUrl":"https://doi.org/10.1145/2732196","url":null,"abstract":"Bitcoin is a decentralized payment system that relies on Proof-of-Work (PoW) to resist double-spending through a distributed timestamping service. To ensure the operation and security of Bitcoin, it is essential that all transactions and their order of execution are available to all Bitcoin users.\u0000 Unavoidably, in such a setting, the security of transactions comes at odds with transaction privacy. Motivated by the fact that transaction confirmation in Bitcoin requires tens of minutes, we analyze the conditions for performing successful double-spending attacks against fast payments in Bitcoin, where the time between the exchange of currency and goods is short (in the order of a minute). We show that unless new detection techniques are integrated in the Bitcoin implementation, double-spending attacks on fast payments succeed with considerable probability and can be mounted at low cost. We propose a new and lightweight countermeasure that enables the detection of double-spending attacks in fast transactions.\u0000 In light of such misbehavior, accountability becomes crucial. We show that in the specific case of Bitcoin, accountability complements privacy. To illustrate this tension, we provide accountability and privacy definition for Bitcoin, and we investigate analytically and empirically the privacy and accountability provisions in Bitcoin.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"16 1","pages":"2:1-2:32"},"PeriodicalIF":0.0,"publicationDate":"2015-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79225772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Edoardo Serra, S. Jajodia, Andrea Pugliese, Antonino Rullo, V. S. Subrahmanian
The National Vulnerability Database (NVD) maintained by the US National Institute of Standards and Technology provides valuable information about vulnerabilities in popular software, as well as any patches available to address these vulnerabilities. Most enterprise security managers today simply patch the most dangerous vulnerabilities; an adversary can thus easily compromise an enterprise by using less important vulnerabilities to penetrate it. In this article, we capture the vulnerabilities in an enterprise as a Vulnerability Dependency Graph (VDG) and show that attack graphs can be expressed in terms of VDGs. We first ask the question: What set of vulnerabilities should an attacker exploit in order to maximize his expected impact? We show that this problem can be solved as an integer linear program. The defender would obviously like to minimize the impact of the worst-case attack mounted by the attacker, but the defender also has an obligation to ensure high productivity within his enterprise. We propose an algorithm that finds a Pareto-optimal solution for the defender, allowing him to simultaneously maximize productivity and minimize the cost of patching products on the enterprise network. We have implemented this framework and show that the runtimes of our computations are within acceptable bounds even for large VDGs containing 30K edges, and that the balance between productivity and the impact of attacks is also acceptable.
{"title":"Pareto-Optimal Adversarial Defense of Enterprise Systems","authors":"Edoardo Serra, S. Jajodia, Andrea Pugliese, Antonino Rullo, V. S. Subrahmanian","doi":"10.1145/2699907","DOIUrl":"https://doi.org/10.1145/2699907","url":null,"abstract":"The National Vulnerability Database (NVD) maintained by the US National Institute of Standards and Technology provides valuable information about vulnerabilities in popular software, as well as any patches available to address these vulnerabilities. Most enterprise security managers today simply patch the most dangerous vulnerabilities—an adversary can thus easily compromise an enterprise by using less important vulnerabilities to penetrate an enterprise. In this article, we capture the vulnerabilities in an enterprise as a Vulnerability Dependency Graph (VDG) and show that attacks graphs can be expressed in them. We first ask the question: What set of vulnerabilities should an attacker exploit in order to maximize his expected impact? We show that this problem can be solved as an integer linear program. The defender would obviously like to minimize the impact of the worst-case attack mounted by the attacker—but the defender also has an obligation to ensure a high productivity within his enterprise. We propose an algorithm that finds a Pareto-optimal solution for the defender that allows him to simultaneously maximize productivity and minimize the cost of patching products on the enterprise network. We have implemented this framework and show that runtimes of our computations are all within acceptable time bounds even for large VDGs containing 30K edges and that the balance between productivity and impact of attacks is also acceptable.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"1 1","pages":"11:1-11:39"},"PeriodicalIF":0.0,"publicationDate":"2015-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89371853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Internet provides an ideal anonymous channel for concealing computer-mediated malicious activities, as the network-based origins of critical electronic textual evidence (e.g., emails, blogs, forum posts, chat logs, etc.) can be easily repudiated. Authorship attribution is the study of identifying the actual author of given anonymous documents based on the text itself, and for decades, many linguistic stylometry and computational techniques have been studied extensively for this purpose. However, most previous research emphasizes improving authorship attribution accuracy, and little work has been done on constructing and visualizing the evidential traits. In addition, these sophisticated techniques are difficult for cyber investigators or linguistic experts to interpret. In this article, based on the End-to-End Digital Investigation (EEDI) framework, we propose a visualizable evidence-driven approach, namely VEA, which aims at facilitating the work of cyber investigation. Our comprehensive controlled experiment and the stratified experiment on the real-life Enron email dataset demonstrate that our approach can achieve even higher accuracy than traditional methods; meanwhile, its output can be easily visualized and interpreted as evidential traits. In addition to identifying the most plausible author of a given text, our approach also estimates the confidence of the predicted result based on a given identification context and presents visualizable linguistic evidence for each candidate.
{"title":"A Visualizable Evidence-Driven Approach for Authorship Attribution","authors":"Steven H. H. Ding, B. Fung, M. Debbabi","doi":"10.1145/2699910","DOIUrl":"https://doi.org/10.1145/2699910","url":null,"abstract":"The Internet provides an ideal anonymous channel for concealing computer-mediated malicious activities, as the network-based origins of critical electronic textual evidence (e.g., emails, blogs, forum posts, chat logs, etc.) can be easily repudiated. Authorship attribution is the study of identifying the actual author of the given anonymous documents based on the text itself, and for decades, many linguistic stylometry and computational techniques have been extensively studied for this purpose. However, most of the previous research emphasizes promoting the authorship attribution accuracy, and few works have been done for the purpose of constructing and visualizing the evidential traits. In addition, these sophisticated techniques are difficult for cyber investigators or linguistic experts to interpret. In this article, based on the End-to-End Digital Investigation (EEDI) framework, we propose a visualizable evidence-driven approach, namely VEA, which aims at facilitating the work of cyber investigation. Our comprehensive controlled experiment and the stratified experiment on the real-life Enron email dataset demonstrate that our approach can achieve even higher accuracy than traditional methods; meanwhile, its output can be easily visualized and interpreted as evidential traits. In addition to identifying the most plausible author of a given text, our approach also estimates the confidence for the predicted result based on a given identification context and presents visualizable linguistic evidence for each candidate.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"73 1","pages":"12:1-12:30"},"PeriodicalIF":0.0,"publicationDate":"2015-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83361498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}