Mignis: A Semantic Based Tool for Firewall Configuration
P. Adão, Claudio Bozzato, G. Rossi, R. Focardi, F. Luccio
The management and specification of access control rules that enforce a given policy is a non-trivial, complex, and time-consuming task. In this paper we aim at simplifying this task at both the specification and verification levels. To this end, we propose a formal model of Netfilter, the firewall system integrated in the Linux kernel. We define an abstraction of the concepts of chains, rules, and packets present in Netfilter configurations, and give a semantics that mimics packet filtering and address translation. We then introduce a simple but powerful language for specifying firewall configurations that are unaffected by the relative ordering of rules and that do not depend on the underlying Netfilter chains. We give a semantics for this language and show that it can be translated into our Netfilter abstraction. We then present Mignis, a publicly available tool that translates abstract firewall specifications into real Netfilter configurations. Mignis is currently used to configure the whole firewall of the DAIS Department of Ca' Foscari University.
{"title":"Mignis: A Semantic Based Tool for Firewall Configuration","authors":"P. Adão, Claudio Bozzato, G. Rossi, R. Focardi, F. Luccio","doi":"10.1109/CSF.2014.32","DOIUrl":"https://doi.org/10.1109/CSF.2014.32","url":null,"abstract":"The management and specification of access control rules that enforce a given policy is a non-trivial, complex, and time consuming task. In this paper we aim at simplifying this task both at specification and verification levels. For that, we propose a formal model of Net filter, a firewall system integrated in the Linux kernel. We define an abstraction of the concepts of chains, rules, and packets existent in Net filter configurations, and give a semantics that mimics packet filtering and address translation. We then introduce a simple but powerful language that permits to specify firewall configurations that are unaffected by the relative ordering of rules, and that does not depend on the underlying Net filter chains. We give a semantics for this language and show that it can be translated into our Net filter abstraction. We then present Mignis, a publicly available tool that translates abstract firewall specifications into real Net filter configurations. Mignis is currently used to configure the whole firewall of the DAIS Department of Ca' Foscari University.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132453983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Provably Sound Browser-Based Enforcement of Web Session Integrity
M. Bugliesi, Stefano Calzavara, R. Focardi, Wilayat Khan, M. Tempesta
Enforcing protection at the browser side has recently become a popular approach for securing web authentication. Though interesting, existing attempts in the literature only address specific classes of attacks, and thus fall short of providing robust foundations for reasoning about web authentication security. In this paper we provide such foundations by introducing a novel notion of web session integrity, which allows us to capture many existing attacks and spot some new ones. We then propose FF+, a security-enhanced model of a web browser that provides a full-fledged and provably sound enforcement of web session integrity. We leverage our theory to develop SessInt, a prototype extension for Google Chrome implementing the security mechanisms formalized in FF+. SessInt provides a level of security very close to FF+, while keeping an eye on usability and user experience.
{"title":"Provably Sound Browser-Based Enforcement of Web Session Integrity","authors":"M. Bugliesi, Stefano Calzavara, R. Focardi, Wilayat Khan, M. Tempesta","doi":"10.1109/CSF.2014.33","DOIUrl":"https://doi.org/10.1109/CSF.2014.33","url":null,"abstract":"Enforcing protection at the browser side has recently become a popular approach for securing web authentication. Though interesting, existing attempts in the literature only address specific classes of attacks, and thus fall short of providing robust foundations to reason on web authentication security. In this paper we provide such foundations, by introducing a novel notion of web session integrity, which allows us to capture many existing attacks and spot some new ones. We then propose FF+, a security-enhanced model of a web browser that provides a full-fledged and provably sound enforcement of web session integrity. We leverage our theory to develop Sess Int, a prototype extension for Google Chrome implementing the security mechanisms formalized in FF+. Sess Int provides a level of security very close to FF+, while keeping an eye at usability and user experience.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131207980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Sound Abstraction of the Parsing Problem
S. Mödersheim, Georgios Katsoris
In formal verification, cryptographic messages are often represented by algebraic terms. This abstracts not only from the intricate details of the real cryptography, but also from the non-cryptographic details: the actual formatting and structuring of messages. We introduce a new algebraic model that includes these details and define a small, simple language to precisely describe message formats. We support fixed-length fields, variable-length fields with offsets, tags, and encodings into smaller alphabets such as Base64, thereby covering both classical formats as in TLS and modern XML-based formats. We define two reasonable properties for a set of formats used in a protocol suite. First, each format should be unambiguous: any string can be parsed in at most one way. Second, the formats should be pairwise disjoint: a string can be parsed as at most one of the formats. We show how to easily establish these properties for many practical formats. By replacing the formats with free function symbols we obtain an abstract model that is compatible with all existing verification tools. We prove that the abstraction is sound for unambiguous, disjoint formats: there is an attack in the concrete message model if there is one in the abstract message model. Finally, we present highlights of a practical case study on TLS.
{"title":"A Sound Abstraction of the Parsing Problem","authors":"S. Mödersheim, Georgios Katsoris","doi":"10.1109/CSF.2014.26","DOIUrl":"https://doi.org/10.1109/CSF.2014.26","url":null,"abstract":"In formal verification, cryptographic messages are often represented by algebraic terms. This abstracts not only from the intricate details of the real cryptography, but also from the details of the non-cryptographic aspects: the actual formatting and structuring of messages. We introduce a new algebraic model to include these details and define a small, simple language to precisely describe message formats. We support fixed-length fields, variable-length fields with offsets, tags, and encodings into smaller alphabets like Base64, thereby covering both classical formats as in TLS and modern XML-based formats. We define two reasonable properties for a set of formats used in a protocol suite. First, each format should be un-ambiguous: any string can be parsed in at most one way. Second, the formats should be pair wise disjoint: a string can be parsed as at most one of the formats. We show how to easily establish these properties for many practical formats. By replacing the formats with free function symbols we obtain an abstract model that is compatible with all existing verification tools. We prove that the abstraction is sound for un-ambiguous, disjoint formats: there is an attack in the concrete message model if there is one in the abstract message model. Finally we present highlights of a practical case study on TLS.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116938212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Portable Software Fault Isolation
Joshua A. Kroll, Gordon Stewart, A. Appel
We present a new technique for architecture-portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But as in traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
{"title":"Portable Software Fault Isolation","authors":"Joshua A. Kroll, Gordon Stewart, A. Appel","doi":"10.1109/CSF.2014.10","DOIUrl":"https://doi.org/10.1109/CSF.2014.10","url":null,"abstract":"We present a new technique for architecture portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the Comp Cert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of Comp Cert and leveraging Comp Cert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of Comp Cert's supported architectures (currently: Power PC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128088607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Dynamic Flow-Sensitive Floating-Label Systems
Pablo Buiras, D. Stefan, Alejandro Russo
Flow-sensitive analysis for information-flow control (IFC) allows data structures to have mutable security labels, i.e., labels that can change over the course of the computation. This feature is often used to boost the permissiveness of the IFC monitor, by rejecting fewer programs, and to reduce the burden of explicit label annotations. However, when added naively, in a purely dynamic setting, mutable labels can expose a high-bandwidth covert channel. In this work, we present an extension of LIO, a language-based floating-label system, that safely handles flow-sensitive references. The key insight for safely manipulating the label of a reference is to consider not only the label on the data stored in the reference, i.e., the reference label, but also the label on the reference label itself. Taking this into consideration, we provide an upgrade primitive that can be used to change the label of a reference in a safe manner. To eliminate the burden of determining when a reference should be upgraded, we additionally provide a mechanism for automatic upgrades. Our approach naturally extends to a concurrent setting, not previously considered by dynamic flow-sensitive systems. For both our sequential and concurrent calculi, we prove non-interference by embedding the flow-sensitive system into the flow-insensitive LIO calculus, a surprising result on its own.
{"title":"On Dynamic Flow-Sensitive Floating-Label Systems","authors":"Pablo Buiras, D. Stefan, Alejandro Russo","doi":"10.1109/CSF.2014.13","DOIUrl":"https://doi.org/10.1109/CSF.2014.13","url":null,"abstract":"Flow-sensitive analysis for information-flow control (IFC) allows data structures to have mutable security labels, i.e., labels that can change over the course of the computation. This feature is often used to boost the permissiveness of the IFC monitor, by rejecting fewer programs, and to reduce the burden of explicit label annotations. However, when added naively, in a purely dynamic setting, mutable labels can expose a high bandwidth covert channel. In this work, we present an extension for LIO-a language-based floating-label system-that safely handles flow-sensitive references. The key insight to safely manipulating the label of a reference is to not only consider the label on the data stored in the reference, i.e., the reference label, but also the label on the reference label itself. Taking this into consideration, we provide an upgrade primitive that can be used to change the label of a reference in a safe manner. To eliminate the burden of determining when a reference should be upgraded, we additionally provide a mechanism for automatic upgrades. Our approach naturally extends to a concurrent setting, not previously considered by dynamic flow-sensitive systems. For both our sequential and concurrent calculi, we prove non-interference by embedding the flow-sensitive system into the flow-insensitive LIO calculus, a surprising result on its own.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117106739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Balancing Societal Security and Individual Privacy: Accountable Escrow System
Jia Liu, M. Ryan, Liqun Chen
Privacy is a core human need, but society sometimes needs to conduct targeted, proportionate investigations in order to provide security. To reconcile individual privacy and societal security, we explore whether we can have surveillance in a form that is verifiably accountable to citizens. This means that citizens get verifiable proofs of the quantity and nature of the surveillance that actually takes place. In our scheme, governments are held accountable for the extent to which they exercise their surveillance power, and political parties can pledge in election campaigns their intention to reduce (or increase) this figure. We propose the general idea of accountable escrow as a way to reconcile and balance the requirements of individual privacy and societal security. We design a balanced cryptosystem for asynchronous communication (e.g., email), and propose a novel method for escrowing the decryption capability in public-key cryptography. A government can exercise this capability in order to conduct targeted surveillance, but doing so necessarily puts records in a public log against which the government is held accountable.
{"title":"Balancing Societal Security and Individual Privacy: Accountable Escrow System","authors":"Jia Liu, M. Ryan, Liqun Chen","doi":"10.1109/CSF.2014.37","DOIUrl":"https://doi.org/10.1109/CSF.2014.37","url":null,"abstract":"Privacy is a core human need, but society sometimes has the requirement to do targeted, proportionate investigations in order to provide security. To reconcile individual privacy and societal security, we explore whether we can have surveillance in a form that is verifiably accountable to citizens. This means that citizens get verifiable proofs of the quantity and nature of the surveillance that actually takes place. In our scheme, governments are held accountable for the extent to which they exercise their surveillance power, and political parties can pledge in election campaigns their intention about reducing (or increasing) this figure. We propose a general idea of accountable escrow to reconciling and balancing the requirements of individual privacy and societal security. We design a balanced crypto system for asynchronous communication (e.g., email). We propose a novel method for escrowing the decryption capability in public-key cryptography. A government can decrypt it in order to conduct targeted surveillance, but doing so necessarily puts records in a public log against which the government is held accountable.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122987675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Proving Differential Privacy in Hoare Logic
G. Barthe, Marco Gaboardi, E. J. G. Arias, Justin Hsu, César Kunz, Pierre-Yves Strub
Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.
{"title":"Proving Differential Privacy in Hoare Logic","authors":"G. Barthe, Marco Gaboardi, E. J. G. Arias, Justin Hsu, César Kunz, Pierre-Yves Strub","doi":"10.1109/CSF.2014.36","DOIUrl":"https://doi.org/10.1109/CSF.2014.36","url":null,"abstract":"Differential privacy is a rigorous, worst-case notion of privacy-preserving computation. Informally, a probabilistic program is differentially private if the participation of a single individual in the input database has a limited effect on the program's distribution on outputs. More technically, differential privacy is a quantitative 2-safety property that bounds the distance between the output distributions of a probabilistic program on adjacent inputs. Like many 2-safety properties, differential privacy lies outside the scope of traditional verification techniques. Existing approaches to enforce privacy are based on intricate, non-conventional type systems, or customized relational logics. These approaches are difficult to implement and often cumbersome to use. We present an alternative approach that verifies differential privacy by standard, non-relational reasoning on non-probabilistic programs. Our approach transforms a probabilistic program into a non-probabilistic program which simulates two executions of the original program. We prove that if the target program is correct with respect to a Hoare specification, then the original probabilistic program is differentially private. We provide a variety of examples from the differential privacy literature to demonstrate the utility of our approach. Finally, we compare our approach with existing verification techniques for privacy.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128997573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Differential Privacy: An Economic Method for Choosing Epsilon
Justin Hsu, Marco Gaboardi, Andreas Haeberlen, S. Khanna, Arjun Narayan, B. Pierce, Aaron Roth
Differential privacy is becoming a gold-standard notion of privacy; it offers a guaranteed bound on the loss of privacy due to the release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something about the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε for a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.
{"title":"Differential Privacy: An Economic Method for Choosing Epsilon","authors":"Justin Hsu, Marco Gaboardi, Andreas Haeberlen, S. Khanna, Arjun Narayan, B. Pierce, Aaron Roth","doi":"10.1109/CSF.2014.35","DOIUrl":"https://doi.org/10.1109/CSF.2014.35","url":null,"abstract":"Differential privacy is becoming a gold standard notion of privacy; it offers a guaranteed bound on loss of privacy due to release of query results, even under worst-case assumptions. The theory of differential privacy is an active research area, and there are now differentially private algorithms for a wide range of problems. However, the question of when differential privacy works in practice has received relatively little attention. In particular, there is still no rigorous method for choosing the key parameter ε, which controls the crucial tradeoff between the strength of the privacy guarantee and the accuracy of the published results. In this paper, we examine the role of these parameters in concrete applications, identifying the key considerations that must be addressed when choosing specific values. This choice requires balancing the interests of two parties with conflicting objectives: the data analyst, who wishes to learn something abou the data, and the prospective participant, who must decide whether to allow their data to be included in the analysis. We propose a simple model that expresses this balance as formulas over a handful of parameters, and we use our model to choose ε on a series of simple statistical studies. We also explore a surprising insight: in some circumstances, a differentially private study can be more accurate than a non-private study for the same cost, under our model. Finally, we discuss the simplifying assumptions in our model and outline a research agenda for possible refinements.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116686567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Peered Bulletin Board for Robust Use in Verifiable Voting Systems
C. Culnane, Steve A. Schneider
The Secure Web Bulletin Board (WBB) is a key component of verifiable election systems. However, there is very little in the literature on its specification, design, and implementation, and there are no formally analysed designs. The WBB is used in the context of election verification to publish evidence of voting and tallying that voters and officials can check, and against which challenges can be launched in the event of malfeasance. In practice, the election authority is responsible for implementing the web bulletin board correctly and reliably, and will wish to ensure that it behaves correctly even in the presence of failures and attacks. To ensure robustness, an implementation will typically use a number of peers so that a correct service can be provided even when some peers go down or behave dishonestly. In this paper we propose a new protocol to implement such a web bulletin board, motivated by the needs of the vVote verifiable voting system. Using a distributed algorithm increases the complexity of the protocol and requires careful reasoning in order to establish correctness. Here we use the Event-B modelling and refinement approach to establish the correctness of the peered design against an idealised specification of the bulletin board behaviour. In particular, we show that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the distributed bulletin board design. The algorithm also behaves correctly even if honest or dishonest peers temporarily drop out of the protocol and then return. The verification approach also establishes that the protocols used within the bulletin board do not interfere with each other. This is the first time a peered secure web bulletin board suite of protocols has been formally verified.
{"title":"A Peered Bulletin Board for Robust Use in Verifiable Voting Systems","authors":"C. Culnane, Steve A. Schneider","doi":"10.1109/CSF.2014.20","DOIUrl":"https://doi.org/10.1109/CSF.2014.20","url":null,"abstract":"The Secure Web Bulletin Board (WBB) is a key component of verifiable election systems. However, there is very little in the literature on their specification, design and implementation, and there are no formally analysed designs. The WBB is used in the context of election verification to publish evidence of voting and tallying that voters and officials can check, and where challenges can be launched in the event of malfeasance. In practice, the election authority has responsibility for implementing the web bulletin board correctly and reliably, and will wish to ensure that it behaves correctly even in the presence of failures and attacks. To ensure robustness, an implementation will typically use a number of peers to be able to provide a correct service even when some peers go down or behave dishonestly. In this paper we propose a new protocol to implement such a Web Bulletin Board, motivated by the needs of the vVote verifiable voting system. Using a distributed algorithm increases the complexity of the protocol and requires careful reasoning in order to establish correctness. Here we use the Event-B modelling and refinement approach to establish correctness of the peered design against an idealised specification of the bulletin board behaviour. In particular we have shown that for n peers, a threshold of t > 2n/3 peers behaving correctly is sufficient to ensure correct behaviour of the bulletin board distributed design. The algorithm also behaves correctly even if honest or dishonest peers temporarily drop out of the protocol and then return. The verification approach also establishes that the protocols used within the bulletin board do not interfere with each other. This is the first time a peered secure web bulletin board suite of protocols has been formally verified.","PeriodicalId":285965,"journal":{"name":"2014 IEEE 27th Computer Security Foundations Symposium","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115623995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}