How Shall We Play a Game?: A Game-theoretical Model for Cyber-warfare Games
Tiffany Bao, Yan Shoshitaishvili, Ruoyu Wang, Christopher Krügel, G. Vigna, David Brumley
DOI: 10.1109/CSF.2017.34

Automated techniques and tools for finding, exploiting and patching vulnerabilities are maturing. In order to achieve an end goal such as winning a cyber-battle, these techniques and tools must be wielded strategically. Currently, strategy development in cyber - even with automated tools - is done manually, and is a bottleneck in practice. In this paper, we apply game theory toward the augmentation of the human decision-making process.

Our work makes two novel contributions. First, previous work is limited by strong assumptions regarding the number of actors, actions, and choices in cyber-warfare. We develop a novel model of cyber-warfare that is more comprehensive than previous work, removing these limitations in the process. Second, we present an algorithm for calculating the optimal strategy of the players in our model. We show that our model is capable of finding better solutions than previous work within seconds, making computer-time strategic reasoning a reality. We also provide new insights, compared to previous models, on the impact of optimal strategies.
Proving Flow Security of Sequential Logic via Automatically-Synthesized Relational Invariants
Hyoukjun Kwon, William R. Harris, H. Esmaeilzadeh
DOI: 10.1109/CSF.2017.35

Due to the proliferation of reprogrammable hardware, core designs built from modules drawn from a variety of sources execute with direct access to critical system resources. Expressing guarantees that such modules satisfy, in particular the dynamic conditions under which they release information about their unbounded streams of inputs, and automatically proving that they satisfy such guarantees, is an open and critical problem.

To address these challenges, we propose a domain-specific language, named STREAMS, for expressing information-flow policies with declassification over unbounded input streams. We also introduce a novel algorithm, named SIMAREL, that given a core design C and STREAMS policy P, automatically proves or falsifies that C satisfies P. The key technical insight behind the design of SIMAREL is a novel algorithm for efficiently synthesizing relational invariants over pairs of circuit executions.

We expressed expected behavior of cores designed independently for research and production as STREAMS policies and used SIMAREL to check if each core satisfies its policy. SIMAREL proved that half of the cores satisfied expected behavior, but found unexpected information leaks in six open-source designs: an Ethernet controller, a flash memory controller, an SD-card storage manager, a robotics controller, a digital-signal processing (DSP) module, and a debugging interface.
Deciding Secrecy of Security Protocols for an Unbounded Number of Sessions: The Case of Depth-Bounded Processes
Emanuele D’Osualdo, C. Ong, Alwen Tiu
DOI: 10.1109/CSF.2017.32

We introduce a new class of security protocols with an unbounded number of sessions and unlimited fresh data for which the problem of secrecy is decidable. The only constraint we place on the class is a notion of depth-boundedness. Precisely, we prove that, restricted to messages of up to a given size, secrecy is decidable for all depth-bounded processes. This decidable fragment of security protocols captures many real-world symmetric key protocols, including Needham-Schroeder Symmetric Key, Otway-Rees, and Yahalom.
Secure Compilation and Hyperproperty Preservation
Marco Patrignani, D. Garg
DOI: 10.1109/CSF.2017.13

The area of secure compilation aims to design compilers which produce hardened code that can withstand attacks from low-level co-linked components. So far, there is no formal correctness criterion for secure compilers that comes with a clear understanding of what security properties the criterion actually provides. Ideally, we would like a criterion that, if fulfilled by a compiler, guarantees that large classes of security properties of source language programs continue to hold in the compiled program, even as the compiled program is run against adversaries with low-level attack capabilities. This paper provides such a novel correctness criterion for secure compilers, called trace-preserving compilation (TPC). We show that TPC preserves a large class of security properties, namely all safety hyperproperties. Further, we show that TPC preserves more properties than full abstraction, the de-facto criterion used for secure compilation. Then, we show that several fully abstract compilers described in the literature satisfy an additional, common property, which implies that they also satisfy TPC. As an illustration, we prove that a fully abstract compiler from a typed source language to an untyped target language satisfies TPC.
Symbolic and Computational Mechanized Verification of the ARINC823 Avionic Protocols
B. Blanchet
DOI: 10.1109/CSF.2017.7

We present the first formal analysis of two avionic protocols that aim to secure air-ground communications, the ARINC823 public-key and shared-key protocols. We verify these protocols both in the symbolic model of cryptography, using ProVerif, and in the computational model, using CryptoVerif. While we confirm many security properties of these protocols, we also find several weaknesses, attacks, and imprecisions in the standard. We propose fixes for these problems. This case study required the specification of new cryptographic primitives in CryptoVerif. It also illustrates the complementarity between symbolic and computational verification.
Mechanizing the Proof of Adaptive, Information-Theoretic Security of Cryptographic Protocols in the Random Oracle Model
Alley Stoughton, Mayank Varia
DOI: 10.1109/CSF.2017.36

We report on our research on proving the security of multi-party cryptographic protocols using the EASYCRYPT proof assistant. We work in the computational model using the sequence of games approach, and define honest-but-curious (semi-honest) security using a variation of the real/ideal paradigm in which, for each protocol party, an adversary chooses protocol inputs in an attempt to distinguish the party's real and ideal games. Our proofs are information-theoretic, instead of being based on complexity theory and computational assumptions. We employ oracles (e.g., random oracles for hashing) whose encapsulated states depend on dynamically made, non-programmable random choices. By limiting an adversary's oracle use, one may obtain concrete upper bounds on the distances between a party's real and ideal games that are expressed in terms of game parameters. Furthermore, our proofs work for adaptive adversaries, ones that, when choosing the value of a protocol input, may condition this choice on their current protocol view and oracle knowledge. We provide an analysis in EASYCRYPT of a three-party private count retrieval protocol. We emphasize the lessons learned from completing this proof.
On the Protection of Private Information in Machine Learning Systems: Two Recent Approaches
Martín Abadi, Ú. Erlingsson, I. Goodfellow, H. B. McMahan, Ilya Mironov, Nicolas Papernot, Kunal Talwar, Li Zhang
DOI: 10.1109/CSF.2017.10

The recent, remarkable growth of machine learning has led to intense interest in the privacy of the data on which machine learning relies, and to new techniques for preserving privacy. However, older ideas about privacy may well remain valid and useful. This note reviews two recent works on privacy in the light of the wisdom of some of the early literature, in particular the principles distilled by Saltzer and Schroeder in the 1970s.
PrivatePool: Privacy-Preserving Ridesharing
Per A. Hallgren, Claudio Orlandi, A. Sabelfeld
DOI: 10.1109/CSF.2017.24

Location-based services have seen tremendous developments over recent years. These services have revolutionized the transportation business, as witnessed by the success of Uber, Lyft, BlaBlaCar, and the like. Yet from the privacy point of view, the state of the art leaves much to be desired. The location of the user is typically shared with the service, opening the door to privacy abuse, as in some recently publicized cases. This paper proposes PrivatePool, a model for privacy-preserving ridesharing. We develop secure multi-party computation techniques for endpoint and trajectory matching that allow dispensing with trust to third parties. At the same time, the users learn of a ride segment they can share and nothing else about other users' location. We establish formal privacy guarantees and investigate how different riding patterns affect the privacy, utility, and performance trade-offs between approaches based on the proximity of endpoints vs. proximity of trajectories.
Tight Bounds on Information Leakage from Repeated Independent Runs
David M. Smith, Geoffrey Smith
DOI: 10.1109/CSF.2017.18

We investigate a problem in quantitative information flow, namely to find the maximum information leakage caused by n repeated independent runs of a channel C with b columns. While this scenario is of general interest, it is particularly motivated by the study of timing attacks on cryptography implemented using the countermeasures known as blinding and bucketing. We measure leakage in terms of multiplicative Bayes capacity (also known as min-capacity) and we prove tight bounds that greatly improve the previously-known ones. To enable efficient computation of our new bounds, we investigate them using techniques of analytic combinatorics, proving that they satisfy a useful recurrence and (when b = 2) a close connection to Ramanujan's Q-function.
Automatically Detecting the Misuse of Secrets: Foundations, Design Principles, and Applications
Kevin Milner, C. Cremers, Jiangshan Yu, M. Ryan
DOI: 10.1109/CSF.2017.21

We develop foundations and several constructions for security protocols that can automatically detect, without false positives, if a secret (such as a key or password) has been misused. Such constructions can be used, e.g., to automatically shut down compromised services, or to automatically revoke misused secrets to minimize the effects of compromise. Our threat model includes malicious agents, (temporarily or permanently) compromised agents, and clones.

Previous works have studied domain-specific partial solutions to this problem. For example, Google's Certificate Transparency aims to provide infrastructure to detect the misuse of a certificate authority's signing key, logs have been used for detecting endpoint compromise, and protocols have been proposed to detect cloned RFID/smart cards. Contrary to these existing approaches, for which the designs are interwoven with domain-specific considerations and which usually do not enable fully automatic response (i.e., they need human assessment), our approach shows where automatic action is possible. Our results unify, provide design rationales, and suggest improvements for the existing domain-specific solutions.

Based on our analysis, we construct several mechanisms for the detection of misuse. Our mechanisms enable automatic response, such as revoking keys or shutting down services, thereby substantially limiting the impact of a compromise.

In several case studies, we show how our mechanisms can be used to substantially increase the security guarantees of a wide range of systems, such as web logins, payment systems, or electronic door locks. For example, we propose and formally verify an improved version of Cloudflare's Keyless SSL protocol that enables key misuse detection.