Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0083
Ghada Almashaqbeh, Fabrice Benhamouda, Seungwook Han, Daniel Jaroslawicz, T. Malkin, Alex Nicita, T. Rabin, Abhishek Shah, Eran Tromer
Abstract Existing models for non-interactive MPC cannot provide full privacy for inputs, because they inherently leak the residual function (i.e., the output of the function on the honest parties’ input together with all possible values of the adversarial inputs). For example, in any non-interactive sealed-bid auction, the last bidder can figure out the highest previous bid. We present a new MPC model which avoids this privacy leak. To achieve this, we utilize a blockchain in a novel way, incorporating smart contracts and arbitrary parties that can be incentivized to perform computation (“bounty hunters,” akin to miners). Security is maintained under a monetary assumption about the parties: an honest party can temporarily supply a recoverable collateral of value higher than the computational cost an adversary can expend. We thus construct non-interactive MPC protocols with strong security guarantees (full security, no residual leakage) in the short term. Over time, as the adversary can invest more and more computational resources, the security guarantee decays. Thus, our model, which we call Gage MPC, is suitable for secure computation with limited-time secrecy, such as auctions. A key ingredient in our protocols is a primitive we call “Gage Time Capsules” (GaTC): a time capsule that allows a party to commit to a value that others are able to reveal, but only at a designated computational cost. A GaTC allows a party to commit to a value together with a monetary collateral. If the original party properly opens the GaTC, it can recover the collateral. Otherwise, the collateral is used to incentivize bounty hunters to open the GaTC. This primitive is used to ensure completion of Gage MPC protocols on the desired inputs. As a requisite tool (of independent interest), we present a generalization of garbled circuits that is more robust: it can tolerate exposure of extra input labels. This is in contrast to Yao’s garbled circuits, whose secrecy breaks down if even a single extra label is exposed. Finally, we present a proof-of-concept implementation of a special case of our construction, yielding an auction functionality over an Ethereum-like blockchain.
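To make the time-capsule idea concrete, the following Python sketch shows a toy brute-force capsule: the committer seals a value under a low-entropy secret so that anyone willing to spend roughly 2**difficulty_bits hash evaluations can recover it, while the committer can open it for free. This only illustrates the "openable at a designated computational cost" property; the paper's actual GaTC construction, its security arguments, and the smart-contract handling of collateral are not captured here, and the names (seal, force_open, difficulty_bits) are hypothetical.

```python
import hashlib
import os
import secrets

def _kdf(secret: bytes, salt: bytes, n: int) -> bytes:
    """Derive an n-byte keystream from the capsule secret."""
    return hashlib.shake_256(salt + secret).digest(n)

def seal(value: bytes, difficulty_bits: int = 20):
    """Seal `value` so opening without the secret costs about 2**difficulty_bits hashes."""
    salt = os.urandom(16)
    secret = secrets.randbelow(1 << difficulty_bits).to_bytes(4, "big")
    hint = hashlib.sha256(salt + secret).digest()   # lets anyone verify a candidate secret
    ciphertext = bytes(a ^ b for a, b in zip(value, _kdf(secret, salt, len(value))))
    capsule = {"salt": salt, "hint": hint, "ct": ciphertext, "bits": difficulty_bits}
    return capsule, secret                          # committer keeps `secret` to open for free

def force_open(capsule: dict) -> bytes:
    """Bounty-hunter opening: brute-force the secret at the designated computational cost."""
    for i in range(1 << capsule["bits"]):
        guess = i.to_bytes(4, "big")
        if hashlib.sha256(capsule["salt"] + guess).digest() == capsule["hint"]:
            pad = _kdf(guess, capsule["salt"], len(capsule["ct"]))
            return bytes(a ^ b for a, b in zip(capsule["ct"], pad))
    raise ValueError("no secret found")

capsule, secret = seal(b"bid=42", difficulty_bits=18)
assert force_open(capsule) == b"bid=42"             # recoverable, but only by doing the work
```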
{"title":"Gage MPC: Bypassing Residual Function Leakage for Non-Interactive MPC","authors":"Ghada Almashaqbeh, Fabrice Benhamouda, Seungwook Han, Daniel Jaroslawicz, T. Malkin, Alex Nicita, T. Rabin, Abhishek Shah, Eran Tromer","doi":"10.2478/popets-2021-0083","DOIUrl":"https://doi.org/10.2478/popets-2021-0083","url":null,"abstract":"Abstract Existing models for non-interactive MPC cannot provide full privacy for inputs, because they inherently leak the residual function (i.e., the output of the function on the honest parties’ input together with all possible values of the adversarial inputs). For example, in any non-interactive sealed-bid auction, the last bidder can figure out what was the highest previous bid. We present a new MPC model which avoids this privacy leak. To achieve this, we utilize a blockchain in a novel way, incorporating smart contracts and arbitrary parties that can be incentivized to perform computation (“bounty hunters,” akin to miners). Security is maintained under a monetary assumption about the parties: an honest party can temporarily supply a recoverable collateral of value higher than the computational cost an adversary can expend. We thus construct non-interactive MPC protocols with strong security guarantees (full security, no residual leakage) in the short term. Over time, as the adversary can invest more and more computational resources, the security guarantee decays. Thus, our model, which we call Gage MPC, is suitable for secure computation with limited-time secrecy, such as auctions. A key ingredient in our protocols is a primitive we call “Gage Time Capsules” (GaTC): a time capsule that allows a party to commit to a value that others are able to reveal but only at a designated computational cost. A GaTC allows a party to commit to a value together with a monetary collateral. If the original party properly opens the GaTC, it can recover the collateral. Otherwise, the collateral is used to incentivize bounty hunters to open the GaTC. This primitive is used to ensure completion of Gage MPC protocols on the desired inputs. As a requisite tool (of independent interest), we present a generalization of garbled circuit that are more robust: they can tolerate exposure of extra input labels. This is in contrast to Yao’s garbled circuits, whose secrecy breaks down if even a single extra label is exposed. Finally, we present a proof-of-concept implementation of a special case of our construction, yielding an auction functionality over an Ethereum-like blockchain.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"528 - 548"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49040223","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0067
M. N. Al-Ameen, Huzeyfe Kocabas, Swapnil Nandy, Tanjina Tamanna
Abstract Although many technologies assume that a device or an account will be used by a single user, prior research has found that this assumption may not hold true in everyday life. Most studies conducted to date have focused on sharing a device or account with members of a household. However, the existing literature offers little understanding of the contexts of sharing devices and accounts, which may extend to a wide range of personal, social, and professional settings. Further, people’s sharing behavior could be affected by their social background. To this end, our paper presents a qualitative study with 59 participants from three countries: Bangladesh, Turkey, and the USA, in which we investigated the sharing of digital devices (e.g., computers, mobile phones) and online accounts, in particular financial and identity accounts (e.g., email, social networking), in various contexts and with different entities, not limited to members of a household. Our study reveals users’ perceptions of risk when sharing a device or account, and their access-control strategies for protecting privacy and security. Based on our analysis, we shed light on the interplay between users’ sharing behavior and their demographics, social background, and cultural values. Taken together, our findings have broad implications that advance the PETS community’s situated understanding of sharing devices and accounts.
{"title":"“We, three brothers have always known everything of each other”: A Cross-cultural Study of Sharing Digital Devices and Online Accounts","authors":"M. N. Al-Ameen, Huzeyfe Kocabas, Swapnil Nandy, Tanjina Tamanna","doi":"10.2478/popets-2021-0067","DOIUrl":"https://doi.org/10.2478/popets-2021-0067","url":null,"abstract":"Abstract Although many technologies assume that a device or an account would be used by a single user, prior research has found that this assumption may not hold true in everyday life. Most studies conducted to date focused on sharing a device or account with the members in a household. However, there is a dearth in existing literature to understand the contexts of sharing devices and accounts, which may extend to a wide range of personal, social, and professional settings. Further, people’s sharing behavior could be impacted by their social background. To this end, our paper presents a qualitative study with 59 participants from three different countries: Bangladesh, Turkey, and USA, where we investigated the sharing of digital devices (e.g., computer, mobile phone) and online accounts, in particular, financial and identity accounts (e.g., email, social networking) in various contexts, and with different entities - not limited to the members in a household. Our study reveals users’ perceptions of risks while sharing a device or account, and their access control strategies to protect privacy and security. Based on our analysis, we shed light on the interplay between users’ sharing behavior and their demographics, social background, and cultural values. Taken together, our findings have broad implications that advance the PETS community’s situated understanding of sharing devices and accounts.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"203 - 224"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46024624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0070
Felix Engelmann, Lukas Müller, Andreas Peter, F. Kargl, Christoph Bösch
Abstract Decentralized token exchanges allow for secure trading of tokens without a trusted third party. However, decentralization is mostly achieved at the expense of transaction privacy. For a fair exchange, transactions must remain private, hiding the participants and volumes while preserving the possibility of noninteractive execution of trades. In this paper we present a swap confidential transaction system (SwapCT) which is related to ring confidential transactions (e.g., as used in Monero) but supports trading among multiple token types and enables secure, partial transactions for noninteractive swaps. We prove that SwapCT is secure in a strict, formal model and demonstrate its efficient performance in a prototype implementation, with logarithmic signature sizes for large anonymity sets. For our construction we design an aggregatable signature scheme which may be of independent interest. Our SwapCT system thereby enables a secure and private exchange of tokens without a trusted third party.
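As a rough illustration of the confidential-transaction machinery such systems build on (not of SwapCT's actual aggregatable signatures, ring signatures, or multi-token swap logic), the Python sketch below uses toy Pedersen-style commitments in a multiplicative group to show how a verifier can check that amounts balance without learning them. The modulus and bases are illustrative only and are not secure parameters.

```python
import secrets

# Toy Pedersen commitments in the multiplicative group mod a Mersenne prime.
# Real confidential-transaction schemes use elliptic-curve groups plus range proofs.
P = (1 << 127) - 1              # illustrative prime modulus (not a vetted parameter)
Q = P - 1                       # exponents are reduced modulo the group order
G, H = 5, 7                     # fixed bases; in practice H needs an unknown dlog w.r.t. G

def commit(amount: int, blinding: int) -> int:
    """Pedersen-style commitment: hides `amount`, homomorphic under multiplication."""
    return (pow(G, amount, P) * pow(H, blinding, P)) % P

# One confidential input of 100 units is split into outputs of 60 and 40 units.
r_in = secrets.randbelow(Q)
c_in = commit(100, r_in)

r_out1 = secrets.randbelow(Q)
r_out2 = (r_in - r_out1) % Q    # blindings chosen so that they cancel
c_out1 = commit(60, r_out1)
c_out2 = commit(40, r_out2)

# Verifier: the transaction balances iff the output commitments multiply to the input one,
# which reveals nothing about the individual amounts.
assert (c_out1 * c_out2) % P == c_in
```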
{"title":"SwapCT: Swap Confidential Transactions for Privacy-Preserving Multi-Token Exchanges","authors":"Felix Engelmann, Lukas Müller, Andreas Peter, F. Kargl, Christoph Bösch","doi":"10.2478/popets-2021-0070","DOIUrl":"https://doi.org/10.2478/popets-2021-0070","url":null,"abstract":"Abstract Decentralized token exchanges allow for secure trading of tokens without a trusted third party. However, decentralization is mostly achieved at the expense of transaction privacy. For a fair exchange, transactions must remain private to hide the participants and volumes while maintaining the possibility for noninteractive execution of trades. In this paper we present a swap confidential transaction system (SwapCT) which is related to ring confidential transactions (e.g. used in Monero) but supports multiple token types to trade among and enables secure, partial transactions for noninteractive swaps. We prove that SwapCT is secure in a strict, formal model and present its efficient performance in a prototype implementation with logarithmic signature sizes for large anonymity sets. For our construction we design an aggregatable signature scheme which might be of independent interest. Our SwapCT system thereby enables a secure and private exchange for tokens without a trusted third party.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"270 - 290"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46784703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0061
K. Chalkias, Shir Cohen, Kevin Lewi, Fredric Moezinia, Yolan Romailler
Abstract This paper presents HashWires, a hash-based range proof protocol that is applicable in settings in which there is a trusted third party (typically a credential issuer) that can generate commitments. We refer to these as “credential-based” range proofs (CBRPs). HashWires improves upon hash-chain solutions that are typically restricted to micro-payments for small interval ranges, achieving an exponential speedup in proof generation and verification time. Under reasonable assumptions and performance considerations, a HashWires proof can be as small as 305 bytes for 64-bit integers. Although CBRPs are not zero-knowledge and are inherently less flexible than general zero-knowledge range proofs, we provide a number of applications in which a credential issuer can leverage HashWires to provide range proofs for private values, without having to rely on heavyweight cryptographic tools and assumptions.
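For intuition, here is a minimal Python sketch of the classic hash-chain range-proof idea that HashWires improves upon: a trusted issuer commits to a private integer by iterating a hash, and the holder proves the value is at least a threshold by releasing an intermediate chain element. The cost is linear in the range, which is exactly the limitation HashWires addresses; the single-chain layout and the names here are illustrative, not the HashWires protocol itself.

```python
import hashlib
import os

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def h_iter(x: bytes, n: int) -> bytes:
    """Apply the hash n times."""
    for _ in range(n):
        x = h(x)
    return x

# Issuer side: commit to a private value (e.g., age = 34) by hashing a seed `age` times.
seed = os.urandom(32)
age = 34
commitment = h_iter(seed, age)        # signed and handed out by the credential issuer

# Holder side: prove age >= 21 by releasing the chain element 21 steps before the commitment.
threshold = 21
proof = h_iter(seed, age - threshold)

# Verifier side: re-hash the proof `threshold` times and compare against the commitment.
assert h_iter(proof, threshold) == commitment
```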
{"title":"HashWires: Hyperefficient Credential-Based Range Proofs","authors":"K. Chalkias, Shir Cohen, Kevin Lewi, Fredric Moezinia, Yolan Romailler","doi":"10.2478/popets-2021-0061","DOIUrl":"https://doi.org/10.2478/popets-2021-0061","url":null,"abstract":"Abstract This paper presents HashWires, a hash-based range proof protocol that is applicable in settings for which there is a trusted third party (typically a credential issuer) that can generate commitments. We refer to these as “credential-based” range proofs (CBRPs). HashWires improves upon hashchain solutions that are typically restricted to micro-payments for small interval ranges, achieving an exponential speedup in proof generation and verification time. Under reasonable assumptions and performance considerations, a Hash-Wires proof can be as small as 305 bytes for 64-bit integers. Although CBRPs are not zero-knowledge and are inherently less flexible than general zero-knowledge range proofs, we provide a number of applications in which a credential issuer can leverage HashWires to provide range proofs for private values, without having to rely on heavyweight cryptographic tools and assumptions.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"76 - 95"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48951511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0080
Edwin Dauber, R. Erbacher, Gregory G. Shearer, Mike Weisman, Frederica Free-Nelson, R. Greenstadt
Abstract Source code authorship attribution can be used for many types of intelligence on binaries and executables, including forensics, but introduces a threat to the privacy of anonymous programmers. Previous work has shown how to attribute individually authored code files and code segments. In this work, we examine authorship segmentation, in which we determine authorship of arbitrary parts of a program. While previous work has performed segmentation at the textual level, we attempt to attribute subtrees of the abstract syntax tree (AST). We focus on two primary problems: identifying the primary author of an arbitrary AST subtree and identifying on which edges of the AST primary authorship changes. We demonstrate that the former is a difficult problem but the latter is much easier. We also demonstrate methods by which we can leverage the easier problem to improve accuracy for the harder problem. We show that while identifying the author of subtrees is difficult overall, this is primarily due to the abundance of small subtrees: in the validation set we can attribute subtrees of at least 25 nodes with accuracy over 80% and at least 33 nodes with accuracy over 90%, while in the test set we can attribute subtrees of at least 33 nodes with accuracy of 70%. While our baseline accuracy for single AST nodes is 20.21% for the validation set and 35.66% for the test set, we present techniques by which we can increase this accuracy to 42.01% and 49.21% respectively. We further present observations about collaborative code found on GitHub that may drive further research.
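To illustrate the kind of preprocessing AST-level segmentation relies on, the Python sketch below enumerates the subtrees of a parsed program and filters them by node count, mirroring the size thresholds mentioned above. It uses Python's own ast module purely for illustration; the paper's feature extraction and classifiers are not reproduced here.

```python
import ast

def subtree_size(node: ast.AST) -> int:
    """Count the nodes in the subtree rooted at `node`."""
    return 1 + sum(subtree_size(child) for child in ast.iter_child_nodes(node))

def subtrees_of_at_least(source: str, min_nodes: int):
    """Yield (node, size) for every AST subtree with at least `min_nodes` nodes."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        size = subtree_size(node)
        if size >= min_nodes:
            yield node, size

code = """
def mean(xs):
    total = 0
    for x in xs:
        total += x
    return total / len(xs)
"""

for node, size in subtrees_of_at_least(code, min_nodes=10):
    print(type(node).__name__, size)
```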
{"title":"Supervised Authorship Segmentation of Open Source Code Projects","authors":"Edwin Dauber, R. Erbacher, Gregory G. Shearer, Mike Weisman, Frederica Free-Nelson, R. Greenstadt","doi":"10.2478/popets-2021-0080","DOIUrl":"https://doi.org/10.2478/popets-2021-0080","url":null,"abstract":"Abstract Source code authorship attribution can be used for many types of intelligence on binaries and executables, including forensics, but introduces a threat to the privacy of anonymous programmers. Previous work has shown how to attribute individually authored code files and code segments. In this work, we examine authorship segmentation, in which we determine authorship of arbitrary parts of a program. While previous work has performed segmentation at the textual level, we attempt to attribute subtrees of the abstract syntax tree (AST). We focus on two primary problems: identifying the primary author of an arbitrary AST subtree and identifying on which edges of the AST primary authorship changes. We demonstrate that the former is a difficult problem but the later is much easier. We also demonstrate methods by which we can leverage the easier problem to improve accuracy for the harder problem. We show that while identifying the author of subtrees is difficult overall, this is primarily due to the abundance of small subtrees: in the validation set we can attribute subtrees of at least 25 nodes with accuracy over 80% and at least 33 nodes with accuracy over 90%, while in the test set we can attribute subtrees of at least 33 nodes with accuracy of 70%. While our baseline accuracy for single AST nodes is 20.21% for the validation set and 35.66% for the test set, we present techniques by which we can increase this accuracy to 42.01% and 49.21% respectively. We further present observations about collaborative code found on GitHub that may drive further research.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"464 - 479"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48050877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0064
José Cabrero-Holgueras, S. Pastrana
Abstract Deep Learning (DL) is a powerful solution for complex problems in many disciplines such as finance, medical research, or social sciences. Due to the high computational cost of DL algorithms, data scientists often rely upon Machine Learning as a Service (MLaaS) to outsource the computation to third-party servers. However, outsourcing the computation raises privacy concerns when dealing with sensitive information, e.g., health or financial records. Also, privacy regulations like the European GDPR limit the collection, distribution, and use of such sensitive data. Recent advances in privacy-preserving computation techniques (i.e., Homomorphic Encryption and Secure Multiparty Computation) have enabled DL training and inference over protected data. However, these techniques are still immature and difficult to deploy in practical scenarios. In this work, we review the evolution of the adaptation of privacy-preserving computation techniques to DL, to understand the gap between research proposals and practical applications. We highlight the relative advantages and disadvantages, considering aspects such as efficiency shortcomings, reproducibility issues due to the lack of standard tools and programming interfaces, or lack of integration with DL frameworks commonly used by the data science community.
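As a small, hypothetical example of the Secure Multiparty Computation building blocks such work relies on, the Python sketch below additively secret-shares an input vector between two parties and evaluates a linear layer on the shares. It deliberately omits what real frameworks must handle (secret weights, multiplication via Beaver triples, nonlinear activations, fixed-point encoding), which is where the deployment difficulties discussed above arise.

```python
import secrets

P = 2**61 - 1   # all arithmetic is done modulo a public prime

def share(x: int):
    """Split x into two additive shares; neither share alone reveals x."""
    s0 = secrets.randbelow(P)
    return s0, (x - s0) % P

# Toy linear layer: the input vector is secret-shared, the weights are public here.
x = [3, 1, 4]
w = [2, 7, 1]
x_shares = [share(v) for v in x]

# Each party locally computes its share of the dot product <x, w>.
partial0 = sum(s0 * wi for (s0, _), wi in zip(x_shares, w)) % P
partial1 = sum(s1 * wi for (_, s1), wi in zip(x_shares, w)) % P

# Recombining the two partial results reveals only the layer output, not the inputs.
assert (partial0 + partial1) % P == sum(a * b for a, b in zip(x, w)) % P
```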
{"title":"SoK: Privacy-Preserving Computation Techniques for Deep Learning","authors":"José Cabrero-Holgueras, S. Pastrana","doi":"10.2478/popets-2021-0064","DOIUrl":"https://doi.org/10.2478/popets-2021-0064","url":null,"abstract":"Abstract Deep Learning (DL) is a powerful solution for complex problems in many disciplines such as finance, medical research, or social sciences. Due to the high computational cost of DL algorithms, data scientists often rely upon Machine Learning as a Service (MLaaS) to outsource the computation onto third-party servers. However, outsourcing the computation raises privacy concerns when dealing with sensitive information, e.g., health or financial records. Also, privacy regulations like the European GDPR limit the collection, distribution, and use of such sensitive data. Recent advances in privacy-preserving computation techniques (i.e., Homomorphic Encryption and Secure Multiparty Computation) have enabled DL training and inference over protected data. However, these techniques are still immature and difficult to deploy in practical scenarios. In this work, we review the evolution of the adaptation of privacy-preserving computation techniques onto DL, to understand the gap between research proposals and practical applications. We highlight the relative advantages and disadvantages, considering aspects such as efficiency shortcomings, reproducibility issues due to the lack of standard tools and programming interfaces, or lack of integration with DL frameworks commonly used by the data science community.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"139 - 162"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49380395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0084
A. Boldyreva, Tianxin Tang
Abstract We study the problem of privacy-preserving approximate kNN search in an outsourced environment — the client sends the encrypted data to an untrusted server and can later perform secure approximate kNN search and updates. We design a security model and propose a generic construction based on locality-sensitive hashing, symmetric encryption, and an oblivious map. The construction provides very strong security guarantees, hiding not only information about the data, but also the access, query, and volume patterns. We implement two concrete schemes, based on an oblivious AVL tree and an oblivious BSkiplist, and evaluate and compare their performance.
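To give a feel for the locality-sensitive-hashing layer of such a generic construction, the Python sketch below buckets vectors by random-hyperplane LSH so that a query only touches one small candidate bucket. In the actual scheme the buckets would be symmetrically encrypted and accessed through an oblivious map, which is what hides the access, query, and volume patterns; that part is omitted here, and all names are illustrative.

```python
import random

random.seed(1)
DIM, NUM_BITS = 8, 6
planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_BITS)]

def lsh_key(vec):
    """Random-hyperplane LSH: vectors with similar direction tend to share a bit string."""
    return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0) for plane in planes)

# Index phase (client side): group vectors by LSH key. In the outsourced setting these
# buckets would be encrypted and stored in an oblivious map at the server.
data = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(100)]
buckets = {}
for i, vec in enumerate(data):
    buckets.setdefault(lsh_key(vec), []).append(i)

# Query phase: only the (small) colliding bucket is retrieved and searched.
query = data[0]
candidates = buckets.get(lsh_key(query), [])
print(f"bucket holds {len(candidates)} of {len(data)} vectors; candidates: {candidates[:5]}")
```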
{"title":"Privacy-Preserving Approximate k-Nearest-Neighbors Search that Hides Access, Query and Volume Patterns","authors":"A. Boldyreva, Tianxin Tang","doi":"10.2478/popets-2021-0084","DOIUrl":"https://doi.org/10.2478/popets-2021-0084","url":null,"abstract":"Abstract We study the problem of privacy-preserving approximate kNN search in an outsourced environment — the client sends the encrypted data to an untrusted server and later can perform secure approximate kNN search and updates. We design a security model and propose a generic construction based on locality-sensitive hashing, symmetric encryption, and an oblivious map. The construction provides very strong security guarantees, not only hiding the information about the data, but also the access, query, and volume patterns. We implement, evaluate efficiency, and compare the performance of two concrete schemes based on an oblivious AVL tree and an oblivious BSkiplist.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"549 - 574"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43435110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0074
W. Lueks, Seda F. Gürses, Michael Veale, Edouard Bugnion, M. Salathé, K. Paterson, C. Troncoso
Abstract There is growing evidence that SARS-CoV-2 can be transmitted beyond close proximity contacts, in particular in closed and crowded environments with insufficient ventilation. To help mitigation efforts, contact tracers need a way to notify those who were present in such environments at the same time as infected individuals. Neither traditional human-based contact tracing powered by handwritten or electronic lists, nor Bluetooth-enabled proximity tracing can handle this problem efficiently. In this paper, we propose CrowdNotifier, a protocol that can complement manual contact tracing by efficiently notifying visitors of venues and events with SARS-CoV-2-positive attendees. We prove that CrowdNotifier provides strong privacy and abuse-resistance, and show that it can scale to handle notification at a national scale.
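The Python sketch below is a simplistic hash-based stand-in for the presence-notification functionality, not CrowdNotifier's actual cryptographic protocol (which provides much stronger privacy and abuse-resistance guarantees): a phone stores tags derived from a venue QR secret and the visit hour, and later matches them locally against published infectious windows. The data flow and all names here are hypothetical.

```python
import hashlib

def presence_tag(venue_secret: bytes, hour: int) -> bytes:
    """Tag stored locally on a visitor's phone; meaningless without the venue secret."""
    return hashlib.sha256(venue_secret + hour.to_bytes(4, "big")).digest()

# Visit phase: the phone scans the venue's QR code (carrying venue_secret) and stores tags.
venue_secret = b"qr-code-secret-for-cafe-on-2021-07-23"
my_tags = {presence_tag(venue_secret, h) for h in (14, 15)}   # visited 14:00-16:00

# Notification phase: after a positive case, the venue secret and the infectious window
# are published; matching happens locally on each phone, never at a central server.
published = [(venue_secret, range(15, 18))]
exposed = any(presence_tag(sec, h) in my_tags for sec, hours in published for h in hours)
print("notify user" if exposed else "no exposure")
```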
{"title":"CrowdNotifier: Decentralized Privacy-Preserving Presence Tracing","authors":"W. Lueks, Seda F. Gürses, Michael Veale, Edouard Bugnion, M. Salathé, K. Paterson, C. Troncoso","doi":"10.2478/popets-2021-0074","DOIUrl":"https://doi.org/10.2478/popets-2021-0074","url":null,"abstract":"Abstract There is growing evidence that SARS-CoV-2 can be transmitted beyond close proximity contacts, in particular in closed and crowded environments with insufficient ventilation. To help mitigation efforts, contact tracers need a way to notify those who were present in such environments at the same time as infected individuals. Neither traditional human-based contact tracing powered by handwritten or electronic lists, nor Bluetooth-enabled proximity tracing can handle this problem efficiently. In this paper, we propose CrowdNotifier, a protocol that can complement manual contact tracing by efficiently notifying visitors of venues and events with SARS-CoV-2-positive attendees. We prove that CrowdNotifier provides strong privacy and abuse-resistance, and show that it can scale to handle notification at a national scale.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"350 - 368"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48688933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0059
Adithya Vadapalli, Fattaneh Bayatbabolghani, Ryan Henry
Abstract We describe the design, analysis, implementation, and evaluation of Pirsona, a digital content delivery system that realizes collaborative-filtering recommendations atop private information retrieval (PIR). This combination of seemingly antithetical primitives makes possible—for the first time—the construction of practically efficient e-commerce and digital media delivery systems that can provide personalized content recommendations based on their users’ historical consumption patterns while simultaneously keeping said consumption patterns private. In designing Pirsona, we have opted for the most performant primitives available (at the expense of rather strong non-collusion assumptions); namely, we use the recent computationally 1-private PIR protocol of Hafiz and Henry (PETS 2019.4) together with a carefully optimized 4PC Boolean matrix factorization.
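For readers unfamiliar with the PIR primitive, the Python sketch below shows the textbook two-server XOR-based scheme: each server sees a uniformly random query vector, yet XORing the two answers recovers exactly the requested record. This is only the generic primitive, shown for intuition; Pirsona instead uses the computationally 1-private protocol of Hafiz and Henry and combines it with secure Boolean matrix factorization, neither of which appears here.

```python
import secrets

# Two-server XOR-based PIR over a small database of fixed-size records.
DB = [b"item-0", b"item-1", b"item-2", b"item-3"]   # both servers hold identical copies
N = len(DB)

def query(index: int):
    """Client: build two random-looking selection vectors that differ only at `index`."""
    q0 = [secrets.randbits(1) for _ in range(N)]
    q1 = q0.copy()
    q1[index] ^= 1
    return q0, q1

def answer(db, q):
    """Server: XOR together the records selected by its query vector."""
    out = bytes(len(db[0]))
    for rec, bit in zip(db, q):
        if bit:
            out = bytes(a ^ b for a, b in zip(out, rec))
    return out

q0, q1 = query(2)
a0, a1 = answer(DB, q0), answer(DB, q1)
record = bytes(x ^ y for x, y in zip(a0, a1))       # all other records cancel out
assert record == DB[2]
```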
{"title":"You May Also Like... Privacy: Recommendation Systems Meet PIR","authors":"Adithya Vadapalli, Fattaneh Bayatbabolghani, Ryan Henry","doi":"10.2478/popets-2021-0059","DOIUrl":"https://doi.org/10.2478/popets-2021-0059","url":null,"abstract":"Abstract We describe the design, analysis, implementation, and evaluation of Pirsona, a digital content delivery system that realizes collaborative-filtering recommendations atop private information retrieval (PIR). This combination of seemingly antithetical primitives makes possible—for the first time—the construction of practically efficient e-commerce and digital media delivery systems that can provide personalized content recommendations based on their users’ historical consumption patterns while simultaneously keeping said consumption patterns private. In designing Pirsona, we have opted for the most performant primitives available (at the expense of rather strong non-collusion assumptions); namely, we use the recent computationally 1-private PIR protocol of Hafiz and Henry (PETS 2019.4) together with a carefully optimized 4PC Boolean matrix factorization.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"30 - 53"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48586178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0082
Daniel Smullen, Yaxing Yao, Yuanyuan Feng, N. Sadeh, Arthur Edelstein, R. Weiss
Abstract Browser users encounter a broad array of potentially intrusive practices: from behavioral profiling, to crypto-mining, fingerprinting, and more. We study people’s perception, awareness, and understanding of those practices, and their preferences for opting out of them. We conducted a mixed-methods study that included qualitative (n=186) and quantitative (n=888) surveys covering 8 neutrally presented practices, equally highlighting both their benefits and risks. Consistent with prior research focusing on specific practices and mitigation techniques, we observe that most people are unaware of how to effectively identify or control the practices we surveyed. However, our user-centered approach reveals diverse views about the perceived risks and benefits, and shows that the majority of our participants wished to both restrict and be explicitly notified about the surveyed practices. Though prior research shows that meaningful controls are rarely available, we found that many participants mistakenly assume opt-out settings are common but simply too difficult to find. However, even if such settings were hypothetically available on every website, our findings suggest that settings which allow practices by default are more burdensome to users than alternatives which are contextualized to website categories. Our results argue for settings which can distinguish among website categories where certain practices are seen as permissible, proactively notify users about their presence, and otherwise deny intrusive practices by default. Standardizing these settings in the browser, rather than leaving them to individual websites, would provide a uniform interface to support notification and control, and could help mitigate dark patterns. We also discuss the regulatory implications of these findings.
{"title":"Managing Potentially Intrusive Practices in the Browser: A User-Centered Perspective","authors":"Daniel Smullen, Yaxing Yao, Yuanyuan Feng, N. Sadeh, Arthur Edelstein, R. Weiss","doi":"10.2478/popets-2021-0082","DOIUrl":"https://doi.org/10.2478/popets-2021-0082","url":null,"abstract":"Abstract Browser users encounter a broad array of potentially intrusive practices: from behavioral profiling, to crypto-mining, fingerprinting, and more. We study people’s perception, awareness, understanding, and preferences to opt out of those practices. We conducted a mixed-methods study that included qualitative (n=186) and quantitative (n=888) surveys covering 8 neutrally presented practices, equally highlighting both their benefits and risks. Consistent with prior research focusing on specific practices and mitigation techniques, we observe that most people are unaware of how to effectively identify or control the practices we surveyed. However, our user-centered approach reveals diverse views about the perceived risks and benefits, and that the majority of our participants wished to both restrict and be explicitly notified about the surveyed practices. Though prior research shows that meaningful controls are rarely available, we found that many participants mistakenly assume opt-out settings are common but just too difficult to find. However, even if they were hypothetically available on every website, our findings suggest that settings which allow practices by default are more burdensome to users than alternatives which are contextualized to website categories instead. Our results argue for settings which can distinguish among website categories where certain practices are seen as permissible, proactively notify users about their presence, and otherwise deny intrusive practices by default. Standardizing these settings in the browser rather than being left to individual websites would have the advantage of providing a uniform interface to support notification, control, and could help mitigate dark patterns. We also discuss the regulatory implications of the findings.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"500 - 527"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43182386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}