M. Alvim, K. Chatzikokolakis, Annabelle McIver, Carroll Morgan, C. Palamidessi, Geoffrey Smith
Protecting sensitive information from improper disclosure is a fundamental security goal. It is complicated, and difficult to achieve, often because of unavoidable or even unpredictable operating conditions that can lead to breaches in planned security defences. An attractive approach is to frame the goal as a quantitative problem, and then to design methods that measure system vulnerabilities in terms of the amount of information they leak. A consequence is that the precise operating conditions, and assumptions about prior knowledge, can play a crucial role in assessing the severity of any measured vulnerability. We develop this theme by concentrating on vulnerability measures that are robust in the sense of allowing general leakage bounds to be placed on a program, bounds that apply whatever its operating conditions and whatever the prior knowledge might be. In particular we propose a theory of channel capacity, generalising the Shannon capacity of information theory, that can apply both to additive and to multiplicative forms of a recently-proposed measure known as g-leakage. Further, we explore the computational aspects of calculating these (new) capacities: one of these scenarios can be solved efficiently by expressing it as a Kantorovich distance, but another turns out to be NP-complete. We also find capacity bounds for arbitrary correlations with data not directly accessed by the channel, as in the scenario of Dalenius's Desideratum.
{"title":"Additive and Multiplicative Notions of Leakage, and Their Capacities","authors":"M. Alvim, K. Chatzikokolakis, Annabelle McIver, Carroll Morgan, C. Palamidessi, Geoffrey Smith","doi":"10.1109/CSF.2014.29","DOIUrl":"https://doi.org/10.1109/CSF.2014.29","url":null,"abstract":"Protecting sensitive information from improper disclosure is a fundamental security goal. It is complicated, and difficult to achieve, often because of unavoidable or even unpredictable operating conditions that can lead to breaches in planned security defences. An attractive approach is to frame the goal as a quantitative problem, and then to design methods that measure system vulnerabilities in terms of the amount of information they leak. A consequence is that the precise operating conditions, and assumptions about prior knowledge, can play a crucial role in assessing the severity of any measured vunerability. We develop this theme by concentrating on vulnerability measures that are robust in the sense of allowing general leakage bounds to be placed on a program, bounds that apply whatever its operating conditions and whatever the prior knowledge might be. In particular we propose a theory of channel capacity, generalising the Shannon capacity of information theory, that can apply both to additive- and to multiplicative forms of a recently-proposed measure known as g-leakage. Further, we explore the computational aspects of calculating these (new) capacities: one of these scenarios can be solved efficiently by expressing it as a Kantorovich distance, but another turns out to be NP-complete. We also find capacity bounds for arbitrary correlations with data not directly accessed by the channel, as in the scenario of Dalenius's Desideratum.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124411633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attribute-based encryption (ABE) has attracted considerable attention in the research community in recent years. It has a number of applications such as broadcast encryption and the cryptographic enforcement of access control policies. Existing instantiations of ABE make use of access structures encoded either as trees of Shamir threshold secret-sharing schemes or monotone span programs. In both cases, the appropriate computations for these schemes are interleaved with the standard operations for cryptographic pairings. Moreover, the resulting schemes are not particularly appropriate for access control policies. In this paper, therefore, we start by examining the representation of access control policies and investigate alternative secret-sharing schemes that could be used to enforce them. We develop new ABE schemes based on the Benaloh-Leichter scheme, which employs only elementary arithmetic operations, and then extend this to arbitrary linear secret-sharing schemes. We then compare the complexity of existing schemes with our scheme based on Benaloh-Leichter.
{"title":"Attribute-Based Encryption for Access Control Using Elementary Operations","authors":"J. Crampton, A. Pinto","doi":"10.1109/CSF.2014.17","DOIUrl":"https://doi.org/10.1109/CSF.2014.17","url":null,"abstract":"Attribute-based encryption (ABE) has attracted considerable attention in the research community in recent years. It has a number of applications such as broadcast encryption and the cryptographic enforcement of access control policies. Existing instantiations of ABE make use of access structures encoded either as trees of Shamir threshold secret-sharing schemes or monotone span programs. In both cases, the appropriate computations for these schemes are interleaved with the standard operations for cryptographic pairings. Moreover, the resulting schemes are not particularly appropriate for access control policies. In this paper, therefore, we start by examining the representation of access control policies and investigate alternative secret-sharing schemes that could be used to enforce them. We develop new ABE schemes based on the Benaloh-Leichter scheme, which employs only elementary arithmetic operations, and then extend this to arbitrary linear secret-sharing schemes. We then compare the complexity of existing schemes with our scheme based on Benaloh-Leichter.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"588 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123178915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Jagadeesan, C. Lubinski, Corin Pitcher, J. Riely, Charles Winebrinner
Digital forensics reports typically document the search process that has led to a conclusion; the primary means to verify the report is to repeat the search process. We believe that, as a result, the Trusted Computing Base for digital forensics is unnecessarily large and opaque. We advocate the use of forensic certificates as intermediate artifacts between search and verification. Because a forensic certificate has a precise semantics, it can be verified without knowledge of the search process and forensic tools used to create it. In addition, this precision opens up avenues for the analysis of forensic specifications. We present a case study using the specification of a deleted file. We propose a verification architecture that addresses the enormous size of digital forensics data sets. As a proof of concept, we consider a computer intrusion case study, drawn from the Honeynet Project. Our Coq formalization yields a verifiable certificate of the correctness of the underlying forensic analysis.
{"title":"Certificates for Verifiable Forensics","authors":"R. Jagadeesan, C. Lubinski, Corin Pitcher, J. Riely, Charles Winebrinner","doi":"10.1109/CSF.2014.11","DOIUrl":"https://doi.org/10.1109/CSF.2014.11","url":null,"abstract":"Digital forensics reports typically document the search process that has led to a conclusion, the primary means to verify the report is to repeat the search process. We believe that, as a result, the Trusted Computing Base for digital forensics is unnecessarily large and opaque. We advocate the use of forensic certificates as intermediate artifacts between search and verification. Because a forensic certificate has a precise semantics, it can be verified without knowledge of the search process and forensic tools used to create it. In addition, this precision opens up avenues for the analysis of forensic specifications. We present a case study using the specification of a deleted file. We propose a verification architecture that addresses the enormous size of digital forensics data sets. As a proof of concept, we consider a computer intrusion case study, drawn from the Honey net project. Our Coq formalization yields a verifiable certificate of the correctness of the underlying forensic analysis.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122671226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To achieve end-to-end security in a system built from parts, it is important to ensure that the composition of secure components is itself secure. This work investigates the compositionality of two popular conditions of possibilistic noninterference. The first condition, progress-insensitive noninterference (PINI), is the security condition enforced by practical tools like JSFlow, Paragon, sequential LIO, Jif, Flow Caml, and SPARK Examiner. We show that this condition is not preserved under fair parallel composition: composing a PINI system fairly with another PINI system can yield an insecure system. We explore constraints that allow recovering compositionality for PINI. Further, we develop a theory of compositional reasoning. In contrast to PINI, we show that PSNI, its progress-sensitive counterpart, behaves well under composition, with and without fairness assumptions. Our work is performed within a general framework for nondeterministic interactive systems.
{"title":"Compositional Information-Flow Security for Interactive Systems","authors":"Willard Rafnsson, A. Sabelfeld","doi":"10.1109/CSF.2014.27","DOIUrl":"https://doi.org/10.1109/CSF.2014.27","url":null,"abstract":"To achieve end-to-end security in a system built from parts, it is important to ensure that the composition of secure components is itself secure. This work investigates the compositionality of two popular conditions of possibilistic noninterference. The first condition, progress-insensitive noninterference (PINI), is the security condition enforced by practical tools like JSFlow, Paragon, sequential LIO, Jif, Flow Caml, and SPARK Examiner. We show that this condition is not preserved under fair parallel composition: composing a PINI system fairly with another PINI system can yield an insecure system. We explore constraints that allow recovering compositionality for PINI. Further, we develop a theory of compositional reasoning. In contrast to PINI, we show what PSNI behaves well under composition, with and without fairness assumptions. Our work is performed within a general framework for nondeterministic interactive systems.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"19 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114031367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many protocols use Diffie-Hellman key agreement, combined with certified long-term values or digital signatures for authentication. These protocols aim at security goals such as key secrecy, forward secrecy, resistance to key compromise attacks, and various flavors of authentication. However, these protocols are challenging to analyze, both in computational and symbolic models. An obstacle in the symbolic model is the undecidability of unification in many theories in the signature of rings. In this paper, we develop an algebraic version of the symbolic approach, working directly within finite fields, the natural structures for the protocols. The adversary, in giving an attack on a protocol goal in a finite field, may rely on any identity in that field. He defeats the protocol if there are attacks in infinitely many finite fields. We prove that, even for this strong adversary, security goals for a wide class of protocols are decidable.
{"title":"Decidability for Lightweight Diffie-Hellman Protocols","authors":"Daniel J. Dougherty, J. Guttman","doi":"10.1109/CSF.2014.23","DOIUrl":"https://doi.org/10.1109/CSF.2014.23","url":null,"abstract":"Many protocols use Diffie-Hellman key agreement, combined with certified long-term values or digital signatures for authentication. These protocols aim at security goals such as key secrecy, forward secrecy, resistance to key compromise attacks, and various flavors of authentication. However, these protocols are challenging to analyze, both in computational and symbolic models. An obstacle in the symbolic model is the undecidability of unification in many theories in the signature of rings. In this paper, we develop an algebraic version of the symbolic approach, working directly within finite fields, the natural structures for the protocols. The adversary, in giving an attack on a protocol goal in a finite field, may rely on any identity in that field. He defeats the protocol if there are attacks in infinitely many finite fields. We prove that, even for this strong adversary, security goals for a wide class of protocols are decidable.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132305092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The risk of catastrophe from an attack is a consequence of the network structure formed by the connected individuals, businesses, and computer systems. Understanding the likelihood of extreme events, or, more generally, the probability distribution of the number of compromised nodes, is an essential requirement for providing risk mitigation or cyber-insurance. However, previous network security research has not considered features of these distributions beyond their first central moments, while previous cyber-insurance research has not considered the effect of topologies on the supply side. We provide a mathematical basis for bridging this gap: we study the complexity of computing these loss-number distributions, both generally and for special cases of common real-world networks. In the case of scale-free networks, we demonstrate that expected loss alone cannot determine the riskiness of a network, and that this riskiness cannot be naively estimated from smaller samples, which highlights both the importance of topological data in security incident reporting and its current absence.
{"title":"The Complexity of Estimating Systematic Risk in Networks","authors":"Benjamin Johnson, Aron Laszka, Jens Grossklags","doi":"10.1109/CSF.2014.30","DOIUrl":"https://doi.org/10.1109/CSF.2014.30","url":null,"abstract":"This risk of catastrophe from an attack is a consequence of a network's structure formed by the connected individuals, businesses and computer systems. Understanding the likelihood of extreme events, or, more generally, the probability distribution of the number of compromised nodes is an essential requirement to provide risk-mitigation or cyber-insurance. However, previous network security research has not considered features of these distributions beyond their first central moments, while previous cyber-insurance research has not considered the effect of topologies on the supply side. We provide a mathematical basis for bridging this gap: we study the complexity of computing these loss-number distributions, both generally and for special cases of common real-world networks. In the case of scale-free networks, we demonstrate that expected loss alone cannot determine the riskiness of a network, and that this riskiness cannot be naively estimated from smaller samples, which highlights the lack/importance of topological data in security incident reporting.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"302 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134290380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper addresses security and safety choices that involve a decision on the timing of an action. Examples of such decisions include when to check log files for intruders and when to monitor financial accounts for fraud or errors. To better understand how performance in timing-related security situations is shaped by individuals' cognitive predispositions, we effectively combine survey measures with economic experiments. Two behavioral experiments are presented in which the timing of online security actions is the critical decision-making factor. The feedback modality in the decision environment is varied between visual feedback with history (Experiment 1), and temporal feedback without history (Experiment 2). Using psychometric scales, we study the role of individual difference variables, specifically risk propensity and need for cognition. The analysis is based on the data from over 450 participants. We find that risk propensity is not a hindrance in timing tasks. Participants of average risk propensity generally benefit from a reflective disposition (high need for cognition), particularly when visual feedback is given. Overall, participants benefit from need for cognition; however, in the more difficult temporal-estimation task, this benefit requires familiarity with the task.
{"title":"How Task Familiarity and Cognitive Predispositions Impact Behavior in a Security Game of Timing","authors":"Jens Grossklags, D. Reitter","doi":"10.1109/CSF.2014.16","DOIUrl":"https://doi.org/10.1109/CSF.2014.16","url":null,"abstract":"This paper addresses security and safety choices that involve a decision on the timing of an action. Examples of such decisions include when to check log files for intruders and when to monitor financial accounts for fraud or errors. To better understand how performance in timing-related security situations is shaped by individuals' cognitive predispositions, we effectively combine survey measures with economic experiments. Two behavioral experiments are presented in which the timing of online security actions is the critical decision-making factor. The feedback modality in the decision-environment is varied between visual feedback with history (Experiment 1), and temporal feedback without history (Experiment 2). Using psychometric scales, we study the role of individual difference variables, specifically risk propensity and need for cognition. The analysis is based on the data from over 450 participants. We find that risk propensity is not a hindrance in timing tasks. Participants of average risk propensity generally benefit from a reflective disposition (high need for cognition), particularly when visual feedback is given. Overall, participants benefit from need for cognition, however, in the more difficult, temporal-estimation task, this requires familiarity with the task.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122588331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Block ciphers such as AES are deterministic, keyed functions that operate on small, fixed-size blocks. Block-cipher modes of operation define a mechanism for probabilistic encryption of arbitrary-length messages using any underlying block cipher. A mode of operation can be proven secure (say, against chosen-plaintext attacks) based on the assumption that the underlying block cipher is a pseudorandom function. Such proofs are complex and error-prone, however, and must be done from scratch whenever a new mode of operation is developed. We propose an automated approach for the security analysis of block-cipher modes of operation based on a "local" analysis of the steps carried out by the mode when handling a single message block. We model these steps as a directed acyclic graph, with nodes corresponding to instructions and edges corresponding to intermediate values. We then introduce a set of labels and constraints on the edges, and prove a meta-theorem showing that any mode for which there exists a labeling of the edges satisfying these constraints is secure (against chosen-plaintext attacks). This allows us to reduce security of a given mode to a constraint-satisfaction problem, which in turn can be handled using an SMT solver. We couple our security-analysis tool with a routine that automatically generates viable modes; together, these allow us to synthesize hundreds of secure modes.
{"title":"Automated Analysis and Synthesis of Block-Cipher Modes of Operation","authors":"A. Malozemoff, Jonathan Katz, M. Green","doi":"10.1109/CSF.2014.18","DOIUrl":"https://doi.org/10.1109/CSF.2014.18","url":null,"abstract":"Block ciphers such as AES are deterministic, keyed functions that operate on small, fixed-size blocks. Block-cipher modes of operation define a mechanism for probabilistic encryption of arbitrary length messages using any underlying block cipher. A mode of operation can be proven secure (say, against chosen-plaintext attacks) based on the assumption that the underlying block cipher is a pseudorandom function. Such proofs are complex and error-prone, however, and must be done from scratch whenever a new mode of operation is developed. We propose an automated approach for the security analysis of block-cipher modes of operation based on a \"local\" analysis of the steps carried out by the mode when handling a single message block. We model these steps as a directed, acyclic graph, with nodes corresponding to instructions and edges corresponding to intermediate values. We then introduce a set of labels and constraints on the edges, and prove a meta-theorem showing that any mode for which there exists a labeling of the edges satisfying these constraints is secure (against chosen-plaintext attacks). This allows us to reduce security of a given mode to a constraint-satisfaction problem, which in turn can be handled using an SMT solver. We couple our security-analysis tool with a routine that automatically generates viable modes, together, these allow us to synthesize hundreds of secure modes.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"261 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122899408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on information flow security for concurrent programs usually assumes sequential consistency although modern multi-core processors often support weaker consistency guarantees. In this article, we clarify the impact that relaxations of sequential consistency have on information flow security. We consider four memory models and prove for each of them that information flow security under this model does not imply information flow security in any of the other models. This result suggests that research on security needs to pay more attention to the consistency guarantees provided by contemporary hardware. The other main technical contribution of this article is a program transformation that soundly enforces information flow security under different memory models. This program transformation is significantly less restrictive than a transformation that first establishes sequential consistency and then applies a traditional information flow analysis for concurrent programs.
{"title":"Noninterference under Weak Memory Models","authors":"H. Mantel, Matthias Perner, Jens Sauer","doi":"10.1109/CSF.2014.14","DOIUrl":"https://doi.org/10.1109/CSF.2014.14","url":null,"abstract":"Research on information flow security for concurrent programs usually assumes sequential consistency although modern multi-core processors often support weaker consistency guarantees. In this article, we clarify the impact that relaxations of sequential consistency have on information flow security. We consider four memory models and prove for each of them that information flow security under this model does not imply information flow security in any of the other models. This result suggests that research on security needs to pay more attention to the consistency guarantees provided by contemporary hardware. The other main technical contribution of this article is a program transformation that soundly enforces information flow security under different memory models. This program transformation is significantly less restrictive than a transformation that first establishes sequential consistency and then applies a traditional information flow analysis for concurrent programs.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114803164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attack trees are widely used to represent threat scenarios in a succinct and intuitive manner, suitable for conveying security information to non-experts. The manual construction of such objects relies on the creativity and experience of specialists, and therefore it is error-prone and impracticable for large systems. Nonetheless, the automated generation of attack trees has only been explored in connection to computer networks, leveraging rich models whose analysis typically leads to an exponential blow-up of the state space. We propose a static analysis approach where attack trees are automatically inferred from a process algebraic specification in a syntax-directed fashion, encompassing a great many application domains and avoiding a systematic exponential explosion. Moreover, we show how the standard propositional denotation of an attack tree can be used to phrase interesting quantitative problems that can be solved through an encoding into Satisfiability Modulo Theories. The flexibility and effectiveness of the approach are demonstrated on the study of a national-scale authentication system, whose attack tree is computed thanks to a Java implementation of the framework.
{"title":"Automated Generation of Attack Trees","authors":"R. Vigo, F. Nielson, H. R. Nielson","doi":"10.1109/CSF.2014.31","DOIUrl":"https://doi.org/10.1109/CSF.2014.31","url":null,"abstract":"Attack trees are widely used to represent threat scenarios in a succinct and intuitive manner, suitable for conveying security information to non-experts. The manual construction of such objects relies on the creativity and experience of specialists, and therefore it is error-prone and impracticable for large systems. Nonetheless, the automated generation of attack trees has only been explored in connection to computer networks and levering rich models, whose analysis typically leads to an exponential blow-up of the state space. We propose a static analysis approach where attack trees are automatically inferred from a process algebraic specification in a syntax-directed fashion, encompassing a great many application domains and avoiding incurring systematically an exponential explosion. Moreover, we show how the standard propositional denotation of an attack tree can be used to phrase interesting quantitative problems, that can be solved through an encoding into Satisfiability Modulo Theories. The flexibility and effectiveness of the approach is demonstrated on the study of a national-scale authentication system, whose attack tree is computed thanks to a Java implementation of the framework.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114346094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}