Development of secured systems by mixing programs, specifications and proofs in an object-oriented programming environment: a case study within the FoCaLiZe environment
Damien Doligez, M. Jaume, R. Rioboo. doi:10.1145/2336717.2336726
FoCaLiZe is an object-oriented programming environment that combines specifications, programs and proofs in the same language. This paper describes how its features can be used to formally express specifications and to develop, by stepwise refinement, the design and implementation of secured systems, while proving that the implementation meets its specification or design requirements. We thus obtain a modular implementation of a generic framework for the definition of security policies, together with a certified enforcement mechanism for these policies.

Static flow-sensitive & context-sensitive information-flow analysis for software product lines: position paper
E. Bodden. doi:10.1145/2336717.2336723
A software product line encodes a potentially large variety of software products as variants of some common code base, e.g., through the use of #ifdef statements or other forms of conditional compilation. Traditional information-flow analyses cannot cope with such constructs. Hence, to check for possibly insecure information flow in a product line, one currently has to analyze each resulting product separately, of which there may be thousands, making this task intractable. We report on ongoing work that will instead enable users to check the security of information flows in an entire software product line in a single pass, without having to generate individual products from the product line. Executing the analysis on the product line promises to be orders of magnitude faster than analyzing products individually. We discuss the design of our information-flow analysis and our ongoing implementation using the IFDS/IDE framework by Reps, Horwitz and Sagiv.

Security-policy monitoring and enforcement with JavaMOP
Soha Hussein, P. Meredith, Grigore Roşu. doi:10.1145/2336717.2336720
Software security attacks represent an ever-growing problem. One way to make software more secure is to use Inlined Reference Monitors (IRMs), which allow security specifications to be inlined inside a target program to ensure its compliance with the desired security specifications. The IRM approach has been developed primarily by the security community. Runtime Verification (RV), on the other hand, is a software engineering approach intended to formally encode system specifications within a target program so that those specifications can be enforced during the execution of the program. Until now, the IRM and RV approaches have lived separate lives; in particular, RV techniques have not been applied to the security domain, being used instead to aid program correctness and testing. This paper discusses the use of a formalism-generic RV system, JavaMOP, as a means to specify IRMs, leveraging the careful engineering of the JavaMOP system to ensure secure operation of software in an efficient manner.

Towards a taint mode for cloud computing web applications
Luciano Bello, Alejandro Russo. doi:10.1145/2336717.2336724
Cloud computing is generally understood as the distribution of data and computations over the Internet. Over the past years, there has been a steep increase in web sites using this technology. Unfortunately, those web sites are not exempt from injection flaws and cross-site scripting, two of the most common security risks in web applications. Taint analysis is an automatic approach to detecting such vulnerabilities. Cloud computing platforms possess several features that, while facilitating the development of web applications, make it difficult to apply off-the-shelf taint analysis techniques. More specifically, several of the existing taint analysis techniques do not deal with persistent storage (e.g. object datastores), opaque objects (objects whose implementation cannot be accessed, so that tracking tainted data becomes a challenge), or rich sets of security policies (e.g. forcing a specific order of sanitizers to be applied). We propose a taint analysis for cloud computing web applications that considers these aspects. Rather than modifying interpreters or compilers, we provide taint analysis via a Python library for the cloud computing platform Google App Engine (GAE). To evaluate the use of our library, we harden an existing GAE web application against cross-site scripting attacks.

Security correctness for secure nested transactions: position paper
Dominic Duggan, Ye Wu. doi:10.1145/2336717.2336721
This article considers the synthesis of two long-standing lines of research in computer security: security correctness for multilevel databases, and language-based security. The motivation is an approach to supporting end-to-end security for a wide class of enterprise applications, namely concurrent transactional applications. The approach extends nested transactions with retroactive abort, a new form of semantics for transactional execution motivated by security concerns. A semantics is given in terms of a local constrained labelled transition system, the TauOne calculus. This allows a noninterference result to be verified by adapting results on observational equivalence from concurrency theory.

Typing illegal information flows as program effects
Ana Gualdina Almeida Matos, J. Santos. doi:10.1145/2336717.2336718
Specification of information flow policies is classically based on a security labeling and a lattice of security levels that establishes how information can flow between security levels. We present a type and effect system for determining the least permissive relaxation of a given confidentiality policy under which a program is typable, given a fixed security labeling. To this end, sets of illegal information flows are represented as downward closure operators (here referred to as flow kernels) on a given lattice of security levels. Illegal information flows can then be seen as program effects, and their representation as flow kernels subsumes in granularity previous lattice-oriented representations of information flow policies. Effect soundness, optimality and preservation results are presented for the proposed type and effect system, for programs written in a concurrent higher-order imperative lambda-calculus with reference creation. Our type and effect system provides a mechanism for deriving the flow kernel that characterizes the illegal flows occurring within a program, which can be used to support runtime decisions about compliance with other policies. This point is illustrated by means of an application to a setting where local programs run under the control of a dynamic allowed flow policy.

Proceedings of the 7th Workshop on Programming Languages and Analysis for Security
S. Maffeis, Tamara Rezk. doi:10.1145/2336717
The ACM SIGPLAN 7th Workshop on Programming Languages and Analysis for Security (PLAS) was held on June 15th, 2012 as a satellite event of PLDI 2012 in Beijing, China. The workshop featured six full papers and three position papers, as well as invited talks by Andrew Myers of Cornell University and Gilles Barthe of IMDEA Software.

A generic approach for security policies composition: position paper
A. Hernandez, F. Nielson. doi:10.1145/2336717.2336722
When modelling access control in distributed systems, the problem of composing security policies arises. Much work has been done on different ways of combining policies, using a variety of logics. In this paper, we propose a more general approach based on a 4-valued logic that abstracts from the specific setting and groups together many of the existing ways of combining policies. Moreover, we propose going one step further, twisting the 4-valued logic to obtain a more traditional approach that might therefore be more appropriate for analysis.

Hash-flow taint analysis of higher-order programs
Shuying Liang, M. Might. doi:10.1145/2336717.2336725
As web applications have grown in popularity, so have attacks on such applications. Cross-site scripting and injection attacks have become particularly problematic. Both vulnerabilities stem, at their core, from improper sanitization of user input. We propose static taint analysis, which can verify the absence of unsanitized-input errors at compile time. Unfortunately, precise static analysis of modern scripting languages like Python is challenging: higher-orderness and complex control flow collide with opaque, dynamic data structures like hash maps and objects. The interdependence of data flow and control flow makes it hard to attain both soundness and precision. In this work, we apply abstract interpretation to sound and precise taint-style static analysis of scripting languages. We first define λH, a core calculus of modern scripting languages, with hash maps, dynamic objects, higher-order functions and first-class control. Then we derive a framework of k-CFA-like CESK-style abstract machines for statically reasoning about λH, but with hash maps factored into a "Curried object store." The Curried object store---and shape analysis on this store---allows us to recover field sensitivity, even in the presence of dynamically modified fields. Lastly, atop this framework, we devise a taint-flow analysis, leveraging its field-sensitive, interprocedural and context-sensitive properties to soundly and precisely detect security vulnerabilities such as XSS attacks in web applications. We have prototyped the analytical framework for Python and conducted preliminary experiments with web applications. A low rate of false alarms demonstrates the promise of this approach.

Knowledge-oriented secure multiparty computation
Piotr (Peter) Mardziel, M. Hicks, Jonathan Katz, M. Srivatsa. doi:10.1145/2336717.2336719
Protocols for secure multiparty computation (SMC) allow a set of mutually distrusting parties to compute a function f of their private inputs while revealing nothing about their inputs beyond what is implied by the result. Depending on f, however, the result itself may reveal more information than parties are comfortable with. Almost all previous work on SMC treats f as given. Left unanswered is the question of how parties should decide whether it is "safe" for them to compute f in the first place. We propose here a way to apply belief tracking to SMC in order to address exactly this question. In our approach, each participating party is able to reason about the increase in knowledge that other parties could gain as a result of computing f, and may choose not to participate (or participate only partially) so as to restrict that gain in knowledge. We develop two techniques---the belief set method and the SMC belief tracking method---prove them sound, and discuss their precision/performance tradeoffs using a series of experiments.
