Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00028
Kurt Thomas, Devdatta Akhawe, Michael Bailey, D. Boneh, Elie Bursztein, Sunny Consolvo, Nicola Dell, Z. Durumeric, Patrick Gage Kelley, Deepak Kumar, Damon McCoy, S. Meiklejohn, T. Ristenpart, G. Stringhini
We argue that existing security, privacy, and anti-abuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks—such as toxic content and surveillance—that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy, and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.
{"title":"SoK: Hate, Harassment, and the Changing Landscape of Online Abuse","authors":"Kurt Thomas, Devdatta Akhawe, Michael Bailey, D. Boneh, Elie Bursztein, Sunny Consolvo, Nicola Dell, Z. Durumeric, Patrick Gage Kelley, Deepak Kumar, Damon McCoy, S. Meiklejohn, T. Ristenpart, G. Stringhini","doi":"10.1109/SP40001.2021.00028","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00028","url":null,"abstract":"We argue that existing security, privacy, and antiabuse protections fail to address the growing threat of online hate and harassment. In order for our community to understand and address this gap, we propose a taxonomy for reasoning about online hate and harassment. Our taxonomy draws on over 150 interdisciplinary research papers that cover disparate threats ranging from intimate partner violence to coordinated mobs. In the process, we identify seven classes of attacks—such as toxic content and surveillance—that each stem from different attacker capabilities and intents. We also provide longitudinal evidence from a three-year survey that hate and harassment is a pervasive, growing experience for online users, particularly for at-risk communities like young adults and people who identify as LGBTQ+. Responding to each class of hate and harassment requires a unique strategy and we highlight five such potential research directions that ultimately empower individuals, communities, and platforms to do so.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"103 1","pages":"247-267"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80417856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00078
David Baelde, S. Delaune, Charlie Jacomme, Adrien Koutsos, Solène Moreau
Given the central importance of designing secure protocols, providing solid mathematical foundations and computer-assisted methods to attest to their correctness is becoming crucial. Here, we elaborate on the formal approach introduced by Bana and Comon in [10], [11], which was originally designed to analyze protocols for a fixed number of sessions and lacks support for proof mechanization.

In this paper, we present a framework and an interactive prover allowing us to mechanize proofs of security protocols for an arbitrary number of sessions in the computational model. More specifically, we develop a meta-logic as well as a proof system for deriving security properties. Proofs in our system only deal with high-level, symbolic representations of protocol executions, similar to proofs in the symbolic model, while providing security guarantees at the computational level. We have implemented our approach within a new interactive prover, the Squirrel prover, which takes as input protocols specified in the applied pi-calculus, and we have performed a number of case studies covering a variety of primitives (hashes, encryption, signatures, Diffie-Hellman exponentiation) and security properties (authentication, strong secrecy, unlinkability).
{"title":"An Interactive Prover for Protocol Verification in the Computational Model","authors":"David Baelde, S. Delaune, Charlie Jacomme, Adrien Koutsos, Solène Moreau","doi":"10.1109/SP40001.2021.00078","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00078","url":null,"abstract":"Given the central importance of designing secure protocols, providing solid mathematical foundations and computer-assisted methods to attest for their correctness is becoming crucial. Here, we elaborate on the formal approach introduced by Bana and Comon in [10], [11], which was originally designed to analyze protocols for a fixed number of sessions, and lacks support for proof mechanization.In this paper, we present a framework and an interactive prover allowing to mechanize proofs of security protocols for an arbitrary number of sessions in the computational model. More specifically, we develop a meta-logic as well as a proof system for deriving security properties. Proofs in our system only deal with high-level, symbolic representations of protocol executions, similar to proofs in the symbolic model, but providing security guarantees at the computational level. We have implemented our approach within a new interactive prover, the Squirrel prover, taking as input protocols specified in the applied pi-calculus, and we have performed a number of case studies covering a variety of primitives (hashes, encryption, signatures, Diffie-Hellman exponentiation) and security properties (authentication, strong secrecy, unlinkability).","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"109 1","pages":"537-554"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74663248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00093
C. Cremers, Samed Düzlü, Rune Fiedler, M. Fischlin, Christian Janson
Modern digital signature schemes can provide more guarantees than the standard notion of (strong) unforgeability, such as offering security even in the presence of maliciously generated keys, or requiring knowledge of a message to produce a signature for it. The use of signature schemes that lack these properties has previously enabled attacks on real-world protocols. In this work we revisit several of these notions beyond unforgeability, establish relations among them, provide the first formal definition of non-re-signability, and give a transformation that can provide these properties for a given signature scheme in a provable and efficient way.

Our results are not only relevant for established schemes: for example, the ongoing NIST PQC competition towards standardizing post-quantum signature schemes has six finalists in its third round. We perform an in-depth analysis of the candidates with respect to their security properties beyond unforgeability. We show that many of them do not yet offer these stronger guarantees, which implies that the security guarantees of these post-quantum schemes are not strictly stronger than, but instead incomparable to, classical signature schemes. We show how applying our transformation would efficiently solve this, paving the way for the standardized schemes to provide these additional guarantees and thereby making them harder to misuse.
{"title":"BUFFing signature schemes beyond unforgeability and the case of post-quantum signatures","authors":"C. Cremers, Samed Düzlü, Rune Fiedler, M. Fischlin, Christian Janson","doi":"10.1109/SP40001.2021.00093","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00093","url":null,"abstract":"Modern digital signature schemes can provide more guarantees than the standard notion of (strong) unforgeability, such as offering security even in the presence of maliciously generated keys, or requiring to know a message to produce a signature for it. The use of signature schemes that lack these properties has previously enabled attacks on real-world protocols. In this work we revisit several of these notions beyond unforgeability, establish relations among them, provide the first formal definition of non re-signability, and a transformation that can provide these properties for a given signature scheme in a provable and efficient way.Our results are not only relevant for established schemes: for example, the ongoing NIST PQC competition towards standardizing post-quantum signature schemes has six finalists in its third round. We perform an in-depth analysis of the candidates with respect to their security properties beyond unforgeability. We show that many of them do not yet offer these stronger guarantees, which implies that the security guarantees of these post-quantum schemes are not strictly stronger than, but instead incomparable to, classical signature schemes. We show how applying our transformation would efficiently solve this, paving the way for the standardized schemes to provide these additional guarantees and thereby making them harder to misuse.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"74 1","pages":"1696-1714"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87085755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00072
S. Kamara, Tarik Moataz, A. Park, Lucy Qin
Gun violence results in a significant number of deaths in the United States. Starting in the 1960s, the US Congress passed a series of gun control laws to regulate the sale and use of firearms. One of the most important but politically fraught gun control measures is a national gun registry. A US Senate office is currently drafting legislation that proposes the creation of a voluntary national gun registration system. At a high level, the bill envisions a decentralized system where local county officials would control and manage the registration data of their constituents. These local databases could then be queried by other officials and law enforcement to trace guns. Due to the sensitive nature of this data, however, these databases should guarantee the confidentiality of the data.

In this work, we translate the high-level vision of the proposed legislation into technical requirements and design a cryptographic protocol that meets them. Roughly speaking, the protocol can be viewed as a decentralized system of locally-managed end-to-end encrypted databases. Our design relies on various cryptographic building blocks including structured encryption, secure multi-party computation and secret sharing. We propose a formal security definition and prove that our design meets it. We implemented our protocol and evaluated its performance empirically at the scale it would have to run if it were deployed in the United States. Our results show that a decentralized and end-to-end encrypted national gun registry is not only possible in theory but feasible in practice.
{"title":"A Decentralized and Encrypted National Gun Registry","authors":"S. Kamara, Tarik Moataz, A. Park, Lucy Qin","doi":"10.1109/SP40001.2021.00072","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00072","url":null,"abstract":"Gun violence results in a significant number of deaths in the United States. Starting in the 1960’s, the US Congress passed a series of gun control laws to regulate the sale and use of firearms. One of the most important but politically fraught gun control measures is a national gun registry. A US Senate office is currently drafting legislation that proposes the creation of a voluntary national gun registration system. At a high level, the bill envisions a decentralized system where local county officials would control and manage the registration data of their constituents. These local databases could then be queried by other officials and law enforcement to trace guns. Due to the sensitive nature of this data, however, these databases should guarantee the confidentiality of the data.In this work, we translate the high-level vision of the proposed legislation into technical requirements and design a crypto- graphic protocol that meets them. Roughly speaking, the protocol can be viewed as a decentralized system of locally-managed end-to-end encrypted databases. Our design relies on various cryptographic building blocks including structured encryption, secure multi-party computation and secret sharing. We propose a formal security definition and prove that our design meets it. We implemented our protocol and evaluated its performance empirically at the scale it would have to run if it were deployed in the United States. Our results show that a decentralized and end-to-end encrypted national gun registry is not only possible in theory but feasible in practice.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"25 1","pages":"1520-1537"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91428329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00033
Thomas Haines, R. Goré, Bhavesh Sharma
Verifiable mix nets, and specifically proofs of (correct) shuffle, are a fundamental building block in numerous applications: these zero-knowledge proofs allow the prover to produce a public transcript which can be perused by the verifier to confirm the purported shuffle. They are particularly vital to verifiable electronic voting, where they underpin almost all voting schemes with non-trivial tallying methods. These complicated pieces of cryptography are a prime location for critical errors which might allow undetected modification of the outcome.

The best solution to preventing these errors is to machine-check the cryptographic properties of the design and implementation of the mix net. Particularly crucial for the integrity of the outcome is the soundness of the design and implementation of the verifier (software). Unfortunately, several different encryption schemes are used in many different slight variations which makes it infeasible to machine-check every single case individually. However, a particular optimised variant of the Terelius-Wikström mix net is, and has been, widely deployed in elections including national elections in Norway, Estonia and Switzerland, albeit with many slight variations and several different encryption schemes.

In this work, we develop the logical theory and formal methods tools to machine-check the design and implementation of all these variants of Terelius-Wikström mix nets, for all the different encryption schemes used; resulting in provably correct mix nets for all these different variations. We do this carefully to ensure that we can extract a formally verified implementation of the verifier (software) which is compatible with existing deployed implementations of the Terelius-Wikström mix net. This gives us provably correct implementations of the verifiers for more than half of the national elections which have used verifiable mix nets.

Our implementation of a proof of correct shuffle is the first to be machine-checked to be cryptographically correct and able to verify proof transcripts from national elections. We demonstrate the practicality of our implementation by verifying transcripts produced by the Verificatum mix net system and the CHVote e-voting system from Switzerland.
{"title":"Did you mix me? Formally Verifying Verifiable Mix Nets in Electronic Voting","authors":"Thomas Haines, R. Goré, Bhavesh Sharma","doi":"10.1109/SP40001.2021.00033","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00033","url":null,"abstract":"Verifiable mix nets, and specifically proofs of (correct) shuffle, are a fundamental building block in numerous applications: these zero-knowledge proofs allow the prover to produce a public transcript which can be perused by the verifier to confirm the purported shuffle. They are particularly vital to verifiable electronic voting, where they underpin almost all voting schemes with non-trivial tallying methods. These complicated pieces of cryptography are a prime location for critical errors which might allow undetected modification of the outcome.The best solution to preventing these errors is to machine-check the cryptographic properties of the design and implementation of the mix net. Particularly crucial for the integrity of the outcome is the soundness of the design and implementation of the verifier (software). Unfortunately, several different encryption schemes are used in many different slight variations which makes it infeasible to machine-check every single case individually. However, a particular optimised variant of the Terelius-Wikström mix net is, and has been, widely deployed in elections including national elections in Norway, Estonia and Switzerland, albeit with many slight variations and several different encryption schemes.In this work, we develop the logical theory and formal methods tools to machine-check the design and implementation of all these variants of Terelius-Wikström mix nets, for all the different encryption schemes used; resulting in provably correct mix nets for all these different variations. We do this carefully to ensure that we can extract a formally verified implementation of the verifier (software) which is compatible with existing deployed implementations of the Terelius-Wikström mix net. This gives us provably correct implementations of the verifiers for more than half of the national elections which have used verifiable mix nets.Our implementation of a proof of correct shuffle is the first to be machine-checked to be cryptographically correct and able to verify proof transcripts from national elections. We demonstrate the practicality of our implementation by verifying transcripts produced by the Verificatum mix net system and the CHVote e-voting system from Switzerland.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"1 1","pages":"1748-1765"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90899736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00099
Nicholas Carlini, Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Florian Tramèr
A private machine learning algorithm hides as much as possible about its training data while still preserving accuracy. In this work, we study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism that modifies the training inputs before feeding them to a normal learner. We formalize both the notion of instance encoding and its privacy by providing two attack models. We first prove impossibility results for achieving privacy under the first (stronger) attack model. Next, we demonstrate practical attacks in the second (weaker) attack model on InstaHide, a recent proposal by Huang, Song, Li and Arora [ICML’20] that aims to use instance encoding for privacy.
{"title":"Is Private Learning Possible with Instance Encoding?","authors":"Nicholas Carlini, Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Florian Tramèr","doi":"10.1109/SP40001.2021.00099","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00099","url":null,"abstract":"A private machine learning algorithm hides as much as possible about its training data while still preserving accuracy. In this work, we study whether a non-private learning algorithm can be made private by relying on an instance-encoding mechanism that modifies the training inputs before feeding them to a normal learner. We formalize both the notion of instance encoding and its privacy by providing two attack models. We first prove impossibility results for achieving a (stronger) model. Next, we demonstrate practical attacks in the second (weaker) attack model on InstaHide, a recent proposal by Huang, Song, Li and Arora [ICML’20] that aims to use instance encoding for privacy.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"76 1","pages":"410-427"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85520491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00079
Xigao Li, Babak Amin Azad, Amir Rahmati, Nick Nikiforakis
As the web keeps increasing in size, the number of vulnerable and poorly-managed websites increases commensurately. Attackers rely on armies of malicious bots to discover these vulnerable websites, compromise their servers, and exfiltrate sensitive user data. It is, therefore, crucial for the security of the web to understand the population and behavior of malicious bots.

In this paper, we report on the design, implementation, and results of Aristaeus, a system for deploying large numbers of "honeysites", i.e., websites that exist for the sole purpose of attracting and recording bot traffic. Through a seven-month-long experiment with 100 dedicated honeysites, Aristaeus recorded 26.4 million requests sent by more than 287K unique IP addresses, with 76,396 of them belonging to clearly malicious bots. By analyzing the type of requests and payloads that these bots send, we discover that the average honeysite received more than 37K requests each month, with more than 50% of these requests attempting to brute-force credentials, fingerprint the deployed web applications, and exploit large numbers of different vulnerabilities. By comparing the declared identity of these bots with their TLS handshakes and HTTP headers, we uncover that more than 86.2% of bots claim to be Mozilla Firefox or Google Chrome, yet are built on simple HTTP libraries and command-line tools.
{"title":"Good Bot, Bad Bot: Characterizing Automated Browsing Activity","authors":"Xigao Li, Babak Amin Azad, Amir Rahmati, Nick Nikiforakis","doi":"10.1109/SP40001.2021.00079","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00079","url":null,"abstract":"As the web keeps increasing in size, the number of vulnerable and poorly-managed websites increases commensurately. Attackers rely on armies of malicious bots to discover these vulnerable websites, compromising their servers, and exfiltrating sensitive user data. It is, therefore, crucial for the security of the web to understand the population and behavior of malicious bots.In this paper, we report on the design, implementation, and results of Aristaeus, a system for deploying large numbers of \"honeysites\", i.e., websites that exist for the sole purpose of attracting and recording bot traffic. Through a seven-month-long experiment with 100 dedicated honeysites, Aristaeus recorded 26.4 million requests sent by more than 287K unique IP addresses, with 76,396 of them belonging to clearly malicious bots. By analyzing the type of requests and payloads that these bots send, we discover that the average honeysite received more than 37K requests each month, with more than 50% of these requests attempting to brute-force credentials, fingerprint the deployed web applications, and exploit large numbers of different vulnerabilities. By comparing the declared identity of these bots with their TLS handshakes and HTTP headers, we uncover that more than 86.2% of bots are claiming to be Mozilla Firefox and Google Chrome, yet are built on simple HTTP libraries and command-line tools.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"16 1","pages":"1589-1605"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87874117","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00094
Nicolas Huaman, Sabrina Amft, Marten Oltrogge, Y. Acar, S. Fahl
Password managers are tools to support users with the secure generation and storage of credentials and logins used in online accounts. Previous work illustrated that building password managers means facing various security and usability challenges. For strong security and good usability, the interaction between password managers and websites needs to be smooth and effortless. However, user reviews for popular password managers suggest interaction problems for some websites. Therefore, to the best of our knowledge, this work is the first to systematically identify these interaction problems and investigate how 15 desktop password managers, including the ten most popular ones, are affected. We use a qualitative analysis approach to identify 39 interaction problems from 2,947 user reviews and 372 GitHub issues for 30 password managers. Next, we implement minimal working examples (MWEs) for all interaction problems we found and evaluate them for all password managers in 585 test cases.

Our results illustrate that a) password managers struggle to correctly implement authentication features such as HTTP Basic Authentication and modern standards such as the autocomplete attribute, and b) websites fail to implement clean and well-structured authentication forms. We conclude that some of our findings can be addressed by either password manager providers or web developers by adhering to already existing standards, recommendations and best practices, while other cases are currently almost impossible to implement securely and require further research.
{"title":"They Would do Better if They Worked Together: The Case of Interaction Problems Between Password Managers and Websites","authors":"Nicolas Huaman, Sabrina Amft, Marten Oltrogge, Y. Acar, S. Fahl","doi":"10.1109/SP40001.2021.00094","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00094","url":null,"abstract":"Password managers are tools to support users with the secure generation and storage of credentials and logins used in online accounts. Previous work illustrated that building password managers means facing various security and usability challenges. For strong security and good usability, the interaction between password managers and websites needs to be smooth and effortless. However, user reviews for popular password managers suggest interaction problems for some websites. Therefore, to the best of our knowledge, this work is the first to systematically identify these interaction problems and investigate how 15 desktop password managers, including the ten most popular ones, are affected. We use a qualitative analysis approach to identify 39 interaction problems from 2,947 user reviews and 372 GitHub issues for 30 password managers. Next, we implement minimal working examples (MWEs) for all interaction problems we found and evaluate them for all password managers in 585 test cases.Our results illustrate that a) password managers struggle to correctly implement authentication features such as HTTP Basic Authentication and modern standards such as the autocomplete-attribute and b) websites fail to implement clean and well-structured authentication forms. We conclude that some of our findings can be addressed by either PWM providers or web-developers by adhering to already existing standards, recommendations and best practices, while other cases are currently almost impossible to implement securely and require further research.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"62 1","pages":"1367-1381"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89498728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-05-01 | DOI: 10.1109/SP40001.2021.00067
Yashvanth Kondi, Bernardo Magri, Claudio Orlandi, Omer Shlomovits
Proactive security is the notion of defending a distributed system against an attacker who compromises different devices throughout its lifetime, but no more than a threshold number of them at any given time. The emergence of threshold wallets for more secure cryptocurrency custody warrants an efficient proactivization protocol tailored to this setting. While many proactivization protocols have been devised and studied in the literature, none of them have communication patterns ideal for threshold wallets. In particular, a (t, n) threshold wallet is designed to have t parties jointly sign a transaction (of which only one may be honest), whereas even the best current proactivization protocols require at least an additional t−1 honest parties to come online simultaneously to refresh the system.

In this work we formulate the notion of refresh with offline devices, where any t_ρ parties may proactivize the system at any time and the remaining n−t_ρ offline parties can non-interactively "catch up" at their leisure. However, many subtle issues arise in realizing this pattern. We identify that this problem is divided into two settings: (2, n) and (t, n) where t > 2. We develop novel techniques to address both settings as follows:

• We show that the (2, n) setting permits a tight t_ρ for refresh. In particular, we give a highly efficient t_ρ = 2 protocol to upgrade a number of standard (2, n) threshold signature schemes to proactive security with offline refresh. This protocol can augment existing implementations of threshold wallets for immediate use: we show that proactivization does not have to interfere with their native mode of operation. This technique is compatible with Schnorr, EdDSA, and, with some effort, even sophisticated ECDSA protocols. By implementation we show that proactivizing two different recent (2, n) ECDSA protocols incurs only 14% and 24% computational overhead respectively, less than 200 bytes, and no extra round of communication.

• For the general (t, n) setting we prove that it is impossible to construct an offline refresh protocol with t_ρ < 2(t−1), i.e., one tolerating a dishonest majority of online parties. Our techniques are novel in reasoning about the message complexity of proactive security, and may be of independent interest.

Our results are positive for small-scale decentralization (such as 2FA with threshold wallets), and negative for large-scale distributed systems with higher thresholds. We thus initiate the study of proactive security with offline refresh, with a comprehensive treatment of the dishonest majority case.
{"title":"Refresh When You Wake Up: Proactive Threshold Wallets with Offline Devices","authors":"Yashvanth Kondi, Bernardo Magri, Claudio Orlandi, Omer Shlomovits","doi":"10.1109/SP40001.2021.00067","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00067","url":null,"abstract":"Proactive security is the notion of defending a distributed system against an attacker who compromises different devices through its lifetime, but no more than a threshold number of them at any given time. The emergence of threshold wallets for more secure cryptocurrency custody warrants an efficient proactivization protocol tailored to this setting. While many proactivization protocols have been devised and studied in the literature, none of them have communication patterns ideal for threshold wallets. In particular a (t, n) threshold wallet is designed to have t parties jointly sign a transaction (of which only one may be honest) whereas even the best current proactivization protocols require at least an additional t−1 honest parties to come online simultaneously to refresh the system.In this work we formulate the notion of refresh with offline devices, where any tρ parties may proactivize the system at any time and the remaining n−tρ offline parties can non-interactively \"catch up\" at their leisure. However, many subtle issues arise in realizing this pattern. We identify that this problem is divided into two settings: (2, n) and (t, n) where t > 2. We develop novel techniques to address both settings as follows:•We show that the (2, n) setting permits a tight tρ for refresh. In particular we give a highly efficient tρ = 2 protocol to upgrade a number of standard (2, n) threshold signature schemes to proactive security with offline refresh. This protocol can augment existing implementations of threshold wallets for immediate use– we show that proactivization does not have to interfere with their native mode of operation. This technique is compatible with Schnorr, EdDSA, and with some effort even sophisticated ECDSA protocols. By implementation we show that proactivizing two different recent (2, n) ECDSA protocols incurs only 14% and 24% computational overhead respectively, less than 200 bytes, and no extra round of communication.•For the general (t, n) setting we prove that it is impossible to construct an offline refresh protocol with tρ < 2(t−1), i.e. tolerating a dishonest majority of online parties. Our techniques are novel in reasoning about the message complexity of proactive security, and may be of independent interest.Our results are positive for small-scale decentralization (such as 2FA with threshold wallets), and negative for large-scale distributed systems with higher thresholds. We thus initiate the study of proactive security with offline refresh, with a comprehensive treatment of the dishonest majority case.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"66 1","pages":"608-625"},"PeriodicalIF":0.0,"publicationDate":"2021-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86066946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-04-22 | DOI: 10.1109/SP40001.2021.00098
Sijun Tan, Brian Knott, Yuan Tian, David J. Wu
We introduce CryptGPU, a system for privacy-preserving machine learning that implements all operations on the GPU (graphics processing unit). Just as GPUs played a pivotal role in the success of modern deep learning, they are also essential for realizing scalable privacy-preserving deep learning. In this work, we start by introducing a new interface to losslessly embed cryptographic operations over secret-shared values (in a discrete domain) into floating-point operations that can be processed by highly-optimized CUDA kernels for linear algebra. We then identify a sequence of "GPU-friendly" cryptographic protocols to enable privacy-preserving evaluation of both linear and non-linear operations on the GPU. Our microbenchmarks indicate that our private GPU-based convolution protocol is over 150× faster than the analogous CPU-based protocol; for non-linear operations like the ReLU activation function, our GPU-based protocol is around 10× faster than its CPU analog. With CryptGPU, we support private inference and training on convolutional neural networks with over 60 million parameters as well as handle large datasets like ImageNet. Compared to the previous state-of-the-art, our protocols achieve a 2× to 8× improvement in private inference for large networks and datasets. For private training, we achieve a 6× to 36× improvement over prior state-of-the-art. Our work not only showcases the viability of performing secure multiparty computation (MPC) entirely on the GPU to newly enable fast privacy-preserving machine learning, but also highlights the importance of designing new MPC primitives that can take full advantage of the GPU’s computing capabilities.
{"title":"CryptGPU: Fast Privacy-Preserving Machine Learning on the GPU","authors":"Sijun Tan, Brian Knott, Yuan Tian, David J. Wu","doi":"10.1109/SP40001.2021.00098","DOIUrl":"https://doi.org/10.1109/SP40001.2021.00098","url":null,"abstract":"We introduce CryptGPU, a system for privacy-preserving machine learning that implements all operations on the GPU (graphics processing unit). Just as GPUs played a pivotal role in the success of modern deep learning, they are also essential for realizing scalable privacy-preserving deep learning. In this work, we start by introducing a new interface to losslessly embed cryptographic operations over secret-shared values (in a discrete domain) into floating-point operations that can be processed by highly-optimized CUDA kernels for linear algebra. We then identify a sequence of \"GPU-friendly\" cryptographic protocols to enable privacy-preserving evaluation of both linear and non-linear operations on the GPU. Our microbenchmarks indicate that our private GPU-based convolution protocol is over 150× faster than the analogous CPU-based protocol; for non-linear operations like the ReLU activation function, our GPU-based protocol is around 10× faster than its CPU analog. With CryptGPU, we support private inference and training on convolutional neural networks with over 60 million parameters as well as handle large datasets like ImageNet. Compared to the previous state-of-the-art, our protocols achieve a 2× to 8× improvement in private inference for large networks and datasets. For private training, we achieve a 6× to 36× improvement over prior state-of-the-art. Our work not only showcases the viability of performing secure multiparty computation (MPC) entirely on the GPU to newly enable fast privacy-preserving machine learning, but also highlights the importance of designing new MPC primitives that can take full advantage of the GPU’s computing capabilities.","PeriodicalId":6786,"journal":{"name":"2021 IEEE Symposium on Security and Privacy (SP)","volume":"50 1","pages":"1021-1038"},"PeriodicalIF":0.0,"publicationDate":"2021-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86148493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}