Proof-of-Vax: Studying User Preferences and Perception of Covid Vaccination Certificates
Pub Date: 2021-06-22 | DOI: 10.2478/popets-2022-0016
Marvin Kowalewski, Franziska Herbert, Theodor Schnitzler, Markus Dürmuth
Abstract Digital tools play an important role in fighting the current global COVID-19 pandemic. We conducted a representative online study in Germany with a sample of 599 participants to evaluate user perceptions of vaccination certificates. We investigated five different variants of vaccination certificates, based on deployed and planned designs, in a between-group design, including paper-based and app-based variants. Our main results show that the willingness to use and adopt vaccination certificates is generally high. Overall, paper-based vaccination certificates were favored over app-based solutions. The willingness to use digital apps decreased significantly with a higher disposition toward privacy and increased with greater worry about the pandemic and greater acceptance of the coronavirus vaccination. Vaccination certificates represent an interesting use case for studying privacy perceptions of health-related data. We hope that our work will inform the ongoing design of vaccination certificates, provide deeper insights into the privacy of health-related data and apps, and prepare us for future potential applications of vaccination certificates and health apps in general.
{"title":"Proof-of-Vax: Studying User Preferences and Perception of Covid Vaccination Certificates","authors":"Marvin Kowalewski, Franziska Herbert, Theodor Schnitzler, Markus Dürmuth","doi":"10.2478/popets-2022-0016","DOIUrl":"https://doi.org/10.2478/popets-2022-0016","url":null,"abstract":"Abstract Digital tools play an important role in fighting the current global COVID-19 pandemic. We conducted a representative online study in Germany on a sample of 599 participants to evaluate the user perception of vaccination certificates. We investigated five different variants of vaccination certificates based on deployed and planned designs in a between-group design, including paper-based and app-based variants. Our main results show that the willingness to use and adopt vaccination certificates is generally high. Overall, paper-based vaccination certificates were favored over app-based solutions. The willingness to use digital apps decreased significantly by a higher disposition to privacy and increased by higher worries about the pandemic and acceptance of the coronavirus vaccination. Vaccination certificates resemble an interesting use case for studying privacy perceptions for health-related data. We hope that our work will educate the currently ongoing design of vaccination certificates, give us deeper insights into the privacy of health-related data and apps, and prepare us for future potential applications of vaccination certificates and health apps in general.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"317 - 338"},"PeriodicalIF":0.0,"publicationDate":"2021-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48508195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-preserving training of tree ensembles over continuous data
Pub Date: 2021-06-05 | DOI: 10.2478/popets-2022-0042
Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen
Abstract Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard “in the clear” algorithm for growing decision trees on data with continuous values requires sorting the training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC; hence, finding secure protocols that avoid this expensive step is a relevant problem in privacy-preserving machine learning. In this paper, we propose three more efficient alternatives for secure training of decision-tree-based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees (“extra-trees”) on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3), cut-points are chosen randomly as well, thereby alleviating the need to sort or discretize the data up front. We implemented all proposed solutions in the semi-honest setting with MPC based on additive secret sharing. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies on par with those obtained in the clear. This makes our solution more efficient than existing approaches, which are based on oblivious sorting.
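To make the sorting-avoidance argument for approach (3) concrete, the following plain-Python sketch shows extra-trees-style split selection in the clear: a random feature and a random cut-point within its observed range, with no per-feature sorting of examples. The function name and toy data are ours for illustration; the paper's actual protocols run logic of this kind under additive secret sharing rather than in the clear.

```python
# Minimal in-the-clear sketch of the "extra-trees" split selection that
# approach (3) relies on: cut-points are drawn at random instead of being
# found by sorting and scanning all candidate thresholds.
import random

def random_split(X, y, n_features):
    """Pick a random feature and a random cut-point in its observed range."""
    feature = random.randrange(n_features)
    values = [row[feature] for row in X]
    lo, hi = min(values), max(values)
    threshold = random.uniform(lo, hi)          # no sorting of examples needed
    left = [i for i, row in enumerate(X) if row[feature] < threshold]
    right = [i for i, row in enumerate(X) if row[feature] >= threshold]
    return feature, threshold, left, right

# Example: one random split over toy data with two continuous features.
X = [[0.1, 5.0], [0.4, 3.2], [0.9, 7.7], [0.3, 1.1]]
y = [0, 0, 1, 0]
print(random_split(X, y, n_features=2))
```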
{"title":"Privacy-preserving training of tree ensembles over continuous data","authors":"Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen","doi":"10.2478/popets-2022-0042","DOIUrl":"https://doi.org/10.2478/popets-2022-0042","url":null,"abstract":"Abstract Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard “in the clear” algorithm to grow decision trees on data with continuous values requires sorting of training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC, hence finding secure protocols that avoid such an expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision tree based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees (“extra-trees”) on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive secret sharing based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies that are at par with those obtained in the clear. This makes our solution more efficient than the existing approaches, which are based on oblivious sorting.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"205 - 226"},"PeriodicalIF":0.0,"publicationDate":"2021-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44040088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy Preference Signals: Past, Present and Future
Pub Date: 2021-06-04 | DOI: 10.2478/popets-2021-0069
M. Hils, Daniel W. Woods, Rainer Böhme
Abstract Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and the intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an opt-in consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how the TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.
{"title":"Privacy Preference Signals: Past, Present and Future","authors":"M. Hils, Daniel W. Woods, Rainer Böhme","doi":"10.2478/popets-2021-0069","DOIUrl":"https://doi.org/10.2478/popets-2021-0069","url":null,"abstract":"Abstract Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an optin consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case-studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"249 - 269"},"PeriodicalIF":0.0,"publicationDate":"2021-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48772269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
User-Level Label Leakage from Gradients in Federated Learning
Pub Date: 2021-05-19 | DOI: 10.2478/popets-2022-0043
A. Wainakh, Fabrizio G. Ventola, Till Müßig, Jens Keim, Carlos Garcia Cordero, Ephraim Zimmer, Tim Grube, K. Kersting, M. Mühlhäuser
Abstract Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potentially sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
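As a rough illustration of the signal this class of attack exploits (a simplified sketch of ours, not the paper's full LLG procedure): with cross-entropy loss, the shared gradient with respect to the last layer's bias sums softmax probabilities minus one-hot labels over the batch, so entries for classes present in the batch tend to be negative early in training.

```python
# Toy demonstration: infer which classes appear in a batch from the sign of
# the last-layer bias gradient of an untrained softmax classifier.
import numpy as np

rng = np.random.default_rng(0)
n_classes, batch = 10, 8
W = rng.normal(scale=0.1, size=(32, n_classes))   # untrained last layer
b = np.zeros(n_classes)

x = rng.normal(size=(batch, 32))                  # "user" features
labels = rng.integers(0, 4, size=batch)           # true labels (only classes 0-3)
onehot = np.eye(n_classes)[labels]

logits = x @ W + b
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
grad_b = (probs - onehot).sum(axis=0)             # gradient shared in federated learning

inferred = np.where(grad_b < 0)[0]                # negative entry => label likely present
print("classes actually in batch:", sorted(set(labels.tolist())))
print("classes inferred from gradient sign:", inferred.tolist())
```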
{"title":"User-Level Label Leakage from Gradients in Federated Learning","authors":"A. Wainakh, Fabrizio G. Ventola, Till Müßig, Jens Keim, Carlos Garcia Cordero, Ephraim Zimmer, Tim Grube, K. Kersting, M. Mühlhäuser","doi":"10.2478/popets-2022-0043","DOIUrl":"https://doi.org/10.2478/popets-2022-0043","url":null,"abstract":"Abstract Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potential sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"227 - 244"},"PeriodicalIF":0.0,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41858773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blocking Without Breaking: Identification and Mitigation of Non-Essential IoT Traffic
Pub Date: 2021-05-11 | DOI: 10.2478/popets-2021-0075
A. Mandalari, Daniel J. Dubois, Roman Kolcun, Muhammad Talha Paracha, H. Haddadi, D. Choffnes
Abstract Despite the prevalence of Internet of Things (IoT) devices, there is little information about the purpose and risks of the Internet traffic these devices generate, and consumers have limited options for controlling those risks. A key open question is whether one can mitigate these risks by automatically blocking some of the Internet connections from IoT devices, without rendering the devices inoperable. In this paper, we address this question by developing a rigorous methodology that relies on automated IoT-device experimentation to reveal which network connections (and the information they expose) are essential, and which are not. We further develop strategies to automatically classify network traffic destinations as either required (i.e., their traffic is essential for devices to work properly) or not, hence allowing firewall rules to block traffic sent to non-required destinations without breaking the functionality of the device. We find that 16 of the 31 devices we tested have at least one blockable non-required destination, with the maximum number of blockable destinations for a single device being 11. We further analyze the destinations of network traffic and find that all third parties observed in our experiments are blockable, while first and support parties are neither uniformly required nor uniformly non-required. Finally, we demonstrate the limitations of existing blocklists on IoT traffic, propose a set of guidelines for automatically limiting non-essential IoT traffic, and develop a prototype system that implements these guidelines.
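A minimal sketch of the final step described above, under our own toy naming (the paper's system automates the experimentation itself and is far more complete): once each destination has been labelled as required or non-required for device functionality, generating blocking rules reduces to a simple filter.

```python
# Hypothetical experiment outcomes: did the device keep working while this
# destination was blocked? True means the destination is non-required.
experiment_results = {
    ("smart_plug", "firmware.vendor.example"): False,    # breaks -> required
    ("smart_plug", "ads.thirdparty.example"): True,      # still works -> blockable
    ("camera", "analytics.tracker.example"): True,
}

def firewall_rules(results):
    """Turn per-destination experiment outcomes into drop rules."""
    rules = []
    for (device, destination), works_when_blocked in results.items():
        if works_when_blocked:                            # non-required destination
            rules.append(f"drop from {device} to {destination}")
    return rules

for rule in firewall_rules(experiment_results):
    print(rule)
```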
{"title":"Blocking Without Breaking: Identification and Mitigation of Non-Essential IoT Traffic","authors":"A. Mandalari, Daniel J. Dubois, Roman Kolcun, Muhammad Talha Paracha, H. Haddadi, D. Choffnes","doi":"10.2478/popets-2021-0075","DOIUrl":"https://doi.org/10.2478/popets-2021-0075","url":null,"abstract":"Abstract Despite the prevalence of Internet of Things (IoT) devices, there is little information about the purpose and risks of the Internet traffic these devices generate, and consumers have limited options for controlling those risks. A key open question is whether one can mitigate these risks by automatically blocking some of the Internet connections from IoT devices, without rendering the devices inoperable. In this paper, we address this question by developing a rigorous methodology that relies on automated IoT-device experimentation to reveal which network connections (and the information they expose) are essential, and which are not. We further develop strategies to automatically classify network traffic destinations as either required (i.e., their traffic is essential for devices to work properly) or not, hence allowing firewall rules to block traffic sent to non-required destinations without breaking the functionality of the device. We find that indeed 16 among the 31 devices we tested have at least one blockable non-required destination, with the maximum number of blockable destinations for a device being 11. We further analyze the destination of network traffic and find that all third parties observed in our experiments are blockable, while first and support parties are neither uniformly required or non-required. Finally, we demonstrate the limitations of existing blocklists on IoT traffic, propose a set of guidelines for automatically limiting non-essential IoT traffic, and we develop a prototype system that implements these guidelines.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"369 - 388"},"PeriodicalIF":0.0,"publicationDate":"2021-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41389878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unlinkable Updatable Hiding Databases and Privacy-Preserving Loyalty Programs
Pub Date: 2021-04-27 | DOI: 10.2478/popets-2021-0039
Aditya Damodaran, A. Rial
Abstract Loyalty programs allow vendors to profile buyers based on their purchase histories, which can reveal privacy-sensitive information. Existing privacy-friendly loyalty programs force buyers to choose whether their purchases are linkable. Moreover, vendors receive more purchase data than is required for profiling. We propose a privacy-preserving loyalty program where purchases are always unlinkable, yet a vendor can profile a buyer based on her purchase history, which remains hidden from the vendor. Our protocol is based on a new building block, an unlinkable updatable hiding database (HD), which we define and construct. HD allows the vendor to initialize and update databases stored by buyers that contain their purchase histories and their accumulated loyalty points. Updates are unlinkable and, at each update, the database is hidden from the vendor. Buyers can neither modify the database nor use old versions of it. Our construction for HD is practical for large databases.
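The HD building block itself is the paper's contribution and is more involved than anything shown here; as a loose, hedged illustration of the general idea of updating a hidden value, the toy sketch below uses an additively homomorphic Pedersen commitment so that a vendor can add loyalty points to a committed balance without learning it. The parameters, names, and the choice of commitment scheme are ours, not the authors' construction, and the parameters are far too small to be secure.

```python
# Toy Pedersen-commitment update: the vendor adds points to a committed
# balance without ever seeing the balance inside the commitment.
p = 1019                      # small safe prime, p = 2q + 1 with q = 509
q = (p - 1) // 2
g, h = 4, 9                   # generators of the order-q subgroup

def commit(value, randomness):
    return (pow(g, value, p) * pow(h, randomness, p)) % p

# Buyer commits to an initial balance of 10 points.
balance, r = 10, 123
C = commit(balance, r)

# Vendor adds 5 points homomorphically, learning nothing about `balance`.
delta = 5
C_updated = (C * pow(g, delta, p)) % p

# Buyer can open the updated commitment with the same randomness.
assert C_updated == commit(balance + delta, r)
print("updated commitment verifies for balance", balance + delta)
```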
{"title":"Unlinkable Updatable Hiding Databases and Privacy-Preserving Loyalty Programs","authors":"Aditya Damodaran, A. Rial","doi":"10.2478/popets-2021-0039","DOIUrl":"https://doi.org/10.2478/popets-2021-0039","url":null,"abstract":"Abstract Loyalty programs allow vendors to profile buyers based on their purchase histories, which can reveal privacy sensitive information. Existing privacy-friendly loyalty programs force buyers to choose whether their purchases are linkable. Moreover, vendors receive more purchase data than required for the sake of profiling. We propose a privacy-preserving loyalty program where purchases are always unlinkable, yet a vendor can profile a buyer based on her purchase history, which remains hidden from the vendor. Our protocol is based on a new building block, an unlinkable updatable hiding database (HD), which we define and construct. HD allows the vendor to initialize and update databases stored by buyers that contain their purchase histories and their accumulated loyalty points. Updates are unlinkable and, at each update, the database is hidden from the vendor. Buyers can neither modify the database nor use old versions of it. Our construction for HD is practical for large databases.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"95 - 121"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41553314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ML-CB: Machine Learning Canvas Block
Pub Date: 2021-04-27 | DOI: 10.2478/popets-2021-0056
Nathan Reitinger, Michelle L. Mazurek
Abstract With the aim of increasing online privacy, we present a novel, machine-learning-based approach to blocking one of the three main ways website visitors are tracked online: canvas fingerprinting. Because canvas fingerprinting is, at its core, carried out by a JavaScript program, and because many of these programs are reused across the web, we are able to fit several machine learning models around a semantic representation of a potentially offending program, achieving accurate and robust classifiers. Our supervised learning approach is trained on a dataset we created by scraping roughly half a million websites using a custom Google Chrome extension that stores information related to the canvas. Classification leverages our key insight that the images drawn by canvas fingerprinting programs have a facially distinct appearance, allowing us to manually classify files based on the images drawn; we take this approach one step further and train our classifiers not on the malleable images themselves, but on the more-difficult-to-change underlying source code that generates the images. As a result, ML-CB allows for more accurate tracker blocking.
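As a toy illustration of training on source code rather than on rendered images (our own tiny example, not the paper's feature pipeline, semantic representation, or dataset), a text classifier over JavaScript tokens might look like the following scikit-learn sketch.

```python
# Classify canvas scripts from their source text: tokenize and fit a model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

scripts = [
    "var c=document.createElement('canvas');c.getContext('2d').fillText('Cwm fjordbank',2,2);c.toDataURL();",
    "ctx.fillText(navigator.userAgent,0,0); var hash = canvas.toDataURL();",
    "var c=document.getElementById('chart'); drawAxes(c.getContext('2d')); plot(points);",
    "ctx.drawImage(logo, 0, 0); requestAnimationFrame(render);",
]
labels = [1, 1, 0, 0]          # 1 = fingerprinting, 0 = benign canvas use

vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_]+")
X = vectorizer.fit_transform(scripts)
model = LogisticRegression().fit(X, labels)

test = ["var px = cv.toDataURL(); send(hashCode(px));"]
print(model.predict(vectorizer.transform(test)))
```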
{"title":"ML-CB: Machine Learning Canvas Block","authors":"Nathan Reitinger, Michelle L. Mazurek","doi":"10.2478/popets-2021-0056","DOIUrl":"https://doi.org/10.2478/popets-2021-0056","url":null,"abstract":"Abstract With the aim of increasing online privacy, we present a novel, machine-learning based approach to blocking one of the three main ways website visitors are tracked online—canvas fingerprinting. Because the act of canvas fingerprinting uses, at its core, a JavaScript program, and because many of these programs are reused across the web, we are able to fit several machine learning models around a semantic representation of a potentially offending program, achieving accurate and robust classifiers. Our supervised learning approach is trained on a dataset we created by scraping roughly half a million websites using a custom Google Chrome extension storing information related to the canvas. Classification leverages our key insight that the images drawn by canvas fingerprinting programs have a facially distinct appearance, allowing us to manually classify files based on the images drawn; we take this approach one step further and train our classifiers not on the malleable images themselves, but on the more-difficult-to-change, underlying source code generating the images. As a result, ML-CB allows for more accurate tracker blocking.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"453 - 473"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42995195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defining Privacy: How Users Interpret Technical Terms in Privacy Policies
Pub Date: 2021-04-27 | DOI: 10.2478/popets-2021-0038
Jenny Tang, Hannah Shoemaker, Ada Lerner, Eleanor Birrell
Abstract Recent privacy regulations such as GDPR and CCPA have emphasized the need for transparent, understandable privacy policies. This work investigates the role technical terms play in policy transparency. We identify potentially misunderstood technical terms that appear in privacy policies through a survey of current privacy policies and a pilot user study. We then run a user study on Amazon Mechanical Turk to evaluate whether users can accurately define these technical terms, to identify commonly held misconceptions, and to investigate how the use of technical terms affects users’ comfort with privacy policies. We find that technical terms are broadly misunderstood and that particular misconceptions are common. We also find that the use of technical terms affects users’ comfort with various privacy policies and their reported likelihood of accepting those policies. We conclude that the current use of technical terms in privacy policies poses a challenge to policy transparency and user privacy, and that companies should take steps to mitigate this effect.
{"title":"Defining Privacy: How Users Interpret Technical Terms in Privacy Policies","authors":"Jenny Tang, Hannah Shoemaker, Ada Lerner, Eleanor Birrell","doi":"10.2478/popets-2021-0038","DOIUrl":"https://doi.org/10.2478/popets-2021-0038","url":null,"abstract":"Abstract Recent privacy regulations such as GDPR and CCPA have emphasized the need for transparent, understandable privacy policies. This work investigates the role technical terms play in policy transparency. We identify potentially misunderstood technical terms that appear in privacy policies through a survey of current privacy policies and a pilot user study. We then run a user study on Amazon Mechanical Turk to evaluate whether users can accurately define these technical terms, to identify commonly held misconceptions, and to investigate how the use of technical terms affects users’ comfort with privacy policies. We find that technical terms are broadly misunderstood and that particular misconceptions are common. We also find that the use of technical terms affects users’ comfort with various privacy policies and their reported likeliness to accept those policies. We conclude that current use of technical terms in privacy policies poses a challenge to policy transparency and user privacy, and that companies should take steps to mitigate this effect.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"70 - 94"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42849258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Faster homomorphic comparison operations for BGV and BFV
Pub Date: 2021-04-27 | DOI: 10.2478/popets-2021-0046
Ilia Iliashenko, Vincent Zucca
Abstract Fully homomorphic encryption (FHE) makes it possible to compute any function on encrypted values. In practice, however, there is no universal FHE scheme that is efficient in all possible use cases. In this work, we show that FHE schemes suited to arithmetic circuits (e.g., BGV or BFV) achieve performance similar to FHE schemes for non-arithmetic circuits (TFHE) in basic comparison tasks such as less-than, maximum, and minimum operations. Our implementation of the less-than function in the HElib library is up to 3 times faster than prior work based on BGV/BFV. It can compare a pair of 64-bit integers in 11 milliseconds, sort 64 32-bit integers in 19 seconds, and find the minimum of 64 32-bit integers in 9.5 seconds on an average laptop without multi-threading.
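To see why comparison fits schemes built for arithmetic circuits, note that for inputs restricted to [0, (p-1)/2] the less-than result depends only on z = x - y mod p, and any function over F_p is a polynomial, so comparison reduces to additions and multiplications mod p, exactly the operations BGV/BFV provide. The plain-Python sketch below (toy modulus, our naming, evaluated in the clear rather than in HElib) interpolates and evaluates that polynomial; the paper's implementation relies on much more efficient polynomial-evaluation techniques.

```python
# In-the-clear sketch: less-than as polynomial evaluation over F_p.
p = 17                          # toy plaintext modulus
half = (p - 1) // 2             # valid inputs: 0 .. 8

def interpolate(points):
    """Lagrange interpolation over F_p; returns coefficients, lowest degree first."""
    coeffs = [0] * len(points)
    for xi, yi in points:
        basis, denom = [1], 1
        for xj, _ in points:
            if xj == xi:
                continue
            denom = denom * (xi - xj) % p
            nxt = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):        # multiply basis by (X - xj)
                nxt[k] = (nxt[k] - xj * c) % p
                nxt[k + 1] = (nxt[k + 1] + c) % p
            basis = nxt
        scale = yi * pow(denom, p - 2, p) % p    # divide by denom via Fermat inverse
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

# Indicator of "z lies in the upper half of F_p", i.e. x - y wrapped around.
lt_poly = interpolate([(z, 1 if z > half else 0) for z in range(p)])

def less_than(x, y):
    z, acc = (x - y) % p, 0
    for c in reversed(lt_poly):                  # Horner evaluation: + and * only
        acc = (acc * z + c) % p
    return acc

assert less_than(3, 5) == 1 and less_than(5, 3) == 0 and less_than(4, 4) == 0
print("3 < 5:", less_than(3, 5))
```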
{"title":"Faster homomorphic comparison operations for BGV and BFV","authors":"Ilia Iliashenko, Vincent Zucca","doi":"10.2478/popets-2021-0046","DOIUrl":"https://doi.org/10.2478/popets-2021-0046","url":null,"abstract":"Abstract Fully homomorphic encryption (FHE) allows to compute any function on encrypted values. However, in practice, there is no universal FHE scheme that is effi-cient in all possible use cases. In this work, we show that FHE schemes suitable for arithmetic circuits (e.g. BGV or BFV) have a similar performance as FHE schemes for non-arithmetic circuits (TFHE) in basic comparison tasks such as less-than, maximum and minimum operations. Our implementation of the less-than function in the HElib library is up to 3 times faster than the prior work based on BGV/BFV. It allows to compare a pair of 64-bit integers in 11 milliseconds, sort 64 32-bit integers in 19 seconds and find the minimum of 64 32-bit integers in 9.5 seconds on an average laptop without multi-threading.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"246 - 264"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48028323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Awareness, Adoption, and Misconceptions of Web Privacy Tools
Pub Date: 2021-04-27 | DOI: 10.2478/popets-2021-0049
Peter Story, Daniel Smullen, Yaxing Yao, A. Acquisti, L. Cranor, N. Sadeh, F. Schaub
Abstract Privacy and security tools can help users protect themselves online. Unfortunately, people are often unaware of such tools, and have potentially harmful misconceptions about the protections provided by the tools they know about. Effectively encouraging the adoption of privacy tools requires insights into people’s tool awareness and understanding. Towards that end, we conducted a demographically stratified survey of 500 US participants to measure their use of and perceptions about five web browsing-related tools: private browsing, VPNs, Tor Browser, ad blockers, and antivirus software. We asked about participants’ perceptions of the protections provided by these tools across twelve realistic scenarios. Our thematic analysis of participants’ responses revealed diverse forms of misconceptions. Some types of misconceptions were common across tools and scenarios, while others were associated with particular combinations of tools and scenarios. For example, some participants suggested that the privacy protections offered by private browsing, VPNs, and Tor Browser would also protect them from security threats – a misconception that might expose them to preventable risks. We anticipate that our findings will help researchers, tool designers, and privacy advocates educate the public about privacy- and security-enhancing technologies.
{"title":"Awareness, Adoption, and Misconceptions of Web Privacy Tools","authors":"Peter Story, Daniel Smullen, Yaxing Yao, A. Acquisti, L. Cranor, N. Sadeh, F. Schaub","doi":"10.2478/popets-2021-0049","DOIUrl":"https://doi.org/10.2478/popets-2021-0049","url":null,"abstract":"Abstract Privacy and security tools can help users protect themselves online. Unfortunately, people are often unaware of such tools, and have potentially harmful misconceptions about the protections provided by the tools they know about. Effectively encouraging the adoption of privacy tools requires insights into people’s tool awareness and understanding. Towards that end, we conducted a demographically-stratified survey of 500 US participants to measure their use of and perceptions about five web browsing-related tools: private browsing, VPNs, Tor Browser, ad blockers, and antivirus software. We asked about participants’ perceptions of the protections provided by these tools across twelve realistic scenarios. Our thematic analysis of participants’ responses revealed diverse forms of misconceptions. Some types of misconceptions were common across tools and scenarios, while others were associated with particular combinations of tools and scenarios. For example, some participants suggested that the privacy protections offered by private browsing, VPNs, and Tor Browser would also protect them from security threats – a misconception that might expose them to preventable risks. We anticipate that our findings will help researchers, tool designers, and privacy advocates educate the public about privacy- and security-enhancing technologies.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"308 - 333"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42662689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}