Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0081
Henry Hosseini, Martin Degeling, Christine Utz, Thomas Hupperich
Abstract Privacy policies have become a focal point of privacy research. With their goal to reflect the privacy practices of a website, service, or app, they are often the starting point for researchers who analyze the accuracy of claimed data practices, user understanding of practices, or control mechanisms for users. Due to vast differences in structure, presentation, and content, it is often challenging to extract privacy policies from online resources like websites for analysis. In the past, researchers have relied on scrapers tailored to the specific analysis or task, which complicates comparing results across different studies. To unify future research in this field, we developed a toolchain to process website privacy policies and prepare them for research purposes. The core part of this chain is a detector module for English and German, using natural language processing and machine learning to automatically determine whether given texts are privacy or cookie policies. We leverage multiple existing data sets to refine our approach, evaluate it on a recently published longitudinal corpus, and show that it contains a number of misclassified documents. We believe that unifying data preparation for the analysis of privacy policies can help make different studies more comparable and is a step towards more thorough analyses. In addition, we provide insights into common pitfalls that may lead to invalid analyses.
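The core of the toolchain described above is a text classifier that decides whether a document is a privacy or cookie policy. As a rough illustration of the general idea only (this is not the authors' detector, and the training snippets below are invented), a minimal bag-of-words Naive Bayes classifier in plain Python:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z]+", text.lower())

class NaiveBayes:
    """Tiny multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.class_counts = Counter(labels)
        for doc, lab in zip(docs, labels):
            self.word_counts[lab].update(tokenize(doc))
        self.vocab = {w for c in self.classes for w in self.word_counts[c]}
        return self

    def predict(self, doc: str) -> str:
        def score(c):
            total = sum(self.word_counts[c].values())
            logp = math.log(self.class_counts[c])
            for w in tokenize(doc):
                logp += math.log((self.word_counts[c][w] + 1) / (total + len(self.vocab)))
            return logp
        return max(self.classes, key=score)

# Invented toy training snippets, purely for illustration.
train_docs = [
    "we collect personal data and share it with third parties",
    "your privacy rights under gdpr data controller processing",
    "today our team shipped a new release with bug fixes",
    "contact us for sales pricing and product demos",
]
train_labels = ["policy", "policy", "other", "other"]
clf = NaiveBayes().fit(train_docs, train_labels)
assert clf.predict("we process your personal data as data controller") == "policy"
```

A production detector would need far richer features and training data (and, per the abstract, must handle both English and German); the sketch only shows the classify-by-word-statistics principle.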
Title: Unifying Privacy Policy Detection. Proceedings on Privacy Enhancing Technologies, 2021(1), 480–499.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0079
Elizabeth C. Crites, Anna Lysyanskaya
Abstract Mercurial signatures are a useful building block for privacy-preserving schemes, such as anonymous credentials, delegatable anonymous credentials, and related applications. They allow a signature σ on a message m under a public key pk to be transformed into a signature σ′ on an equivalent message m′ under an equivalent public key pk′ for an appropriate notion of equivalence. For example, pk and pk′ may be unlinkable pseudonyms of the same user, and m and m′ may be unlinkable pseudonyms of a user to whom some capability is delegated. The only previously known construction of mercurial signatures suffers a severe limitation: in order to sign messages of length ℓ, the signer’s public key must also be of length ℓ. In this paper, we eliminate this restriction and provide an interactive signing protocol that admits messages of any length. We prove our scheme existentially unforgeable under chosen open message attacks (EUF-CoMA) under a variant of the asymmetric bilinear decisional Diffie-Hellman assumption (ABDDH).
Title: Mercurial Signatures for Variable-Length Messages. Proceedings on Privacy Enhancing Technologies, 2021(1), 441–463.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0072
Brandon Broadnax, Alexander Koch, Jeremias Mechler, Tobias Müller, J. Müller-Quade, Matthias Nagel
Abstract In practice, there are numerous settings where mutually distrusting parties need to perform distributed computations on their private inputs. For instance, participants in a first-price sealed-bid online auction do not want their bids to be disclosed. This problem can be addressed using secure multi-party computation (MPC), where parties can evaluate a publicly known function on their private inputs by executing a specific protocol that only reveals the correct output, but nothing else about the private inputs. Such distributed computations performed over the Internet are susceptible to remote hacks that may take place during the computation. As a consequence, sensitive data such as private bids may leak. All existing MPC protocols do not provide any protection against the consequences of such remote hacks. We present the first MPC protocols that protect the remotely hacked parties’ inputs and outputs from leaking. More specifically, unless the remote hack takes place before the party received its input or all parties are corrupted, a hacker is unable to learn the parties’ inputs and outputs, and is also unable to modify them. We achieve these strong (privacy) guarantees by utilizing the fact that in practice parties may not be susceptible to remote attacks at every point in time, but only while they are online, i.e. able to receive messages. To this end, we model communication via explicit channels. In particular, we introduce channels with an airgap switch (disconnect-able by the party in control of the switch), and unidirectional data diodes. These channels and their isolation properties, together with very few, similarly simple and plausibly remotely unhackable hardware modules serve as the main ingredient for attaining such strong security guarantees. In order to formalize these strong guarantees, we propose the UC with Fortified Security (UC#) framework, a variant of the Universal Composability (UC) framework.
Title: Fortified Multi-Party Computation: Taking Advantage of Simple Secure Hardware Modules. Proceedings on Privacy Enhancing Technologies, 2021(1), 312–338.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0063
J. Ernst, Alexander Koch
Abstract A private stream aggregation (PSA) scheme is a protocol of n clients and one aggregator. At every time step, the clients send an encrypted value to the (untrusted) aggregator, who is able to compute the sum of all client values, but cannot learn the values of individual clients. One possible application of PSA is privacy-preserving smart-metering, where a power supplier can learn the total power consumption, but not the consumption of individual households. We construct a simple PSA scheme that supports labels and which we prove to be secure in the standard model. Labels are useful to restrict the access of the aggregator, because they prevent the aggregator from combining ciphertexts with different labels (or from different time-steps) and thus avoid leaking information about values of individual clients. The scheme is based on key-homomorphic pseudorandom functions (PRFs) as the only primitive, supports a large message space, scales well for a large number of users and has small ciphertexts. We provide an implementation of the scheme with a lattice-based key-homomorphic PRF (secure in the ROM) and measure the performance of the implementation. Furthermore, we discuss practical issues such as how to avoid a trusted party during the setup and how to cope with clients joining or leaving the system.
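The role of labels and key-homomorphic PRFs can be illustrated with a toy additive-masking scheme: if the clients' keys sum to zero and the masking function is key-homomorphic, the masks cancel in the aggregate, but only when every ciphertext carries the same label. The sketch below uses the linear map F(k, t) = k·H(t), which is key-homomorphic but not a secure PRF; the paper's construction instead uses a lattice-based key-homomorphic PRF, and all names and parameters here are illustrative.

```python
import hashlib
import secrets

P = 2**61 - 1  # toy prime modulus (illustrative choice)

def h(label: str) -> int:
    # Hash a label/time-step to a field element (random-oracle stand-in).
    return int.from_bytes(hashlib.sha256(label.encode()).digest(), "big") % P

def keygen(n: int) -> list[int]:
    # Client keys chosen so that they sum to 0 mod P.
    keys = [secrets.randbelow(P) for _ in range(n - 1)]
    keys.append((-sum(keys)) % P)
    return keys

def encrypt(value: int, key: int, label: str) -> int:
    # F(k, t) = k * h(t) is key-homomorphic but NOT a secure PRF;
    # it only illustrates why the masks cancel in the aggregate.
    return (value + key * h(label)) % P

def aggregate(ciphertexts: list[int]) -> int:
    # Because the keys sum to 0 mod P, the masks cancel here,
    # but only if all ciphertexts share the same label.
    return sum(ciphertexts) % P

keys = keygen(5)
values = [3, 1, 4, 1, 5]
cts = [encrypt(v, k, "2021-07-23/slot-0") for v, k in zip(values, keys)]
assert aggregate(cts) == sum(values)
```

Mixing ciphertexts from different labels leaves a non-cancelling mask in the sum, which is exactly the access restriction the abstract describes.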
Title: Private Stream Aggregation with Labels in the Standard Model. Proceedings on Privacy Enhancing Technologies, 2021(1), 117–138.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0076
Logan Arkema, Micah Sherr
Abstract Computer applications often leave traces or residues that enable forensic examiners to gain a detailed understanding of the actions a user performed on a computer. Such digital breadcrumbs are left by a large variety of applications, potentially (and indeed likely) unbeknownst to their users. This paper presents the concept of residue-free computing in which a user can operate any existing application installed on their computer in a mode that prevents trace data from being recorded to disk, thus frustrating the forensic process and enabling more privacy-preserving computing. In essence, residue-free computing provides an “incognito mode” for any application. We introduce our implementation of residue-free computing, ResidueFree, and motivate ResidueFree by inventorying the potentially sensitive and privacy-invasive residue left by popular applications. We demonstrate that ResidueFree allows users to operate these applications without leaving trace data, while incurring modest performance overheads.
Title: Residue-Free Computing. Proceedings on Privacy Enhancing Technologies, 2021(1), 389–405.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0066
Alexandra Dirksen, David Klein, Robert Michael, Tilman Stehr, Konrad Rieck, Martin Johns
Abstract HTTPS is a cornerstone of privacy in the modern Web. The public key infrastructure underlying HTTPS, however, is a frequent target of attacks. In several cases, forged certificates have been issued by compromised Certificate Authorities (CAs) and used to spy on users at large scale. While the concept of Certificate Transparency (CT) provides a means for detecting such forgeries, it builds on a distributed system of CT logs whose correctness is still insufficiently protected. By compromising a certificate authority and the corresponding log, a covert adversary can still issue rogue certificates unnoticed. We introduce LogPicker, a novel protocol for strengthening the public key infrastructure of HTTPS. LogPicker enables a pool of CT logs to collaborate, where a randomly selected log includes the certificate while the rest witness and testify to the certificate issuance process. As a result, CT logs become capable of auditing the log in charge independently without the need for a trusted third party. This auditing forces an attacker to control each participating witness, which significantly raises the bar for issuing rogue certificates. LogPicker is efficient and designed to be deployed incrementally, allowing a smooth transition towards a more secure Web.
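The structure described above, one randomly chosen log in charge plus the remaining logs as witnesses, can be sketched in a few lines. This is emphatically not the LogPicker protocol: HMACs stand in for the logs' signatures, and the shared randomness is simply assumed to exist (how the pool agrees on it is not modeled here). All names are hypothetical.

```python
import hashlib
import hmac
import secrets

LOGS = [f"log-{i}" for i in range(8)]
KEYS = {log: secrets.token_bytes(32) for log in LOGS}  # toy per-log MAC keys

def pick_log(cert_der: bytes, epoch_randomness: bytes) -> str:
    # Deterministic but unpredictable choice of the log in charge,
    # derived from randomness the pool is assumed to share.
    digest = hmac.new(epoch_randomness, cert_der, hashlib.sha256).digest()
    return LOGS[int.from_bytes(digest, "big") % len(LOGS)]

def witness_statements(cert_der: bytes, chosen: str) -> dict[str, str]:
    # Every other log testifies that it witnessed this issuance for `chosen`;
    # an attacker would have to control all of these witnesses at once.
    return {
        log: hmac.new(KEYS[log], cert_der + chosen.encode(), hashlib.sha256).hexdigest()
        for log in LOGS
        if log != chosen
    }

cert = b"toy certificate bytes"
chosen = pick_log(cert, b"epoch-randomness-2021-07-23")
testimony = witness_statements(cert, chosen)
assert chosen in LOGS and len(testimony) == len(LOGS) - 1
```

The point of the sketch is only the trust structure: no single log can vouch for an issuance alone, because the selection is outside any one log's control and the rest of the pool co-attests.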
Title: LogPicker: Strengthening Certificate Transparency Against Covert Adversaries. Proceedings on Privacy Enhancing Technologies, 2021(1), 184–202.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0071
C. Mouchet, J. Troncoso-Pastoriza, Jean-Philippe Bossuat, J. Hubaux
Abstract We propose and evaluate a secure-multiparty-computation (MPC) solution in the semi-honest model with dishonest majority that is based on multiparty homomorphic encryption (MHE). To support our solution, we introduce a multiparty version of the Brakerski-Fan-Vercauteren homomorphic cryptosystem and implement it in an open-source library. MHE-based MPC solutions have several advantages: Their transcript is public, their offline phase is compact, and their circuit-evaluation procedure is noninteractive. By exploiting these properties, the communication complexity of MPC tasks is reduced from quadratic to linear in the number of parties, thus enabling secure computation among potentially thousands of parties and in a broad variety of computing paradigms, from the traditional peer-to-peer setting to cloud-outsourcing and smart-contract technologies. MHE-based approaches can also outperform the state-of-the-art solutions, even for a small number of parties. We demonstrate this for three circuits: private input selection with application to private-information retrieval, component-wise vector multiplication with application to private-set intersection, and Beaver multiplication triples generation. For the first circuit, privately selecting one input among eight thousand parties’ inputs (of 32 KB each) requires only 1.31 MB of communication per party and completes in 61.7 seconds. For the second circuit with eight parties, our approach is 8.6 times faster and requires 39.3 times less communication than the current methods. For the third circuit and ten parties, our approach generates 20 times more triples per second while requiring 136 times less communication per triple than an approach based on oblivious transfer. We implemented our scheme in the Lattigo library and open-sourced the code at github.com/ldsec/lattigo.
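One of the benchmark circuits above, Beaver multiplication triple generation, is easier to follow with a concrete sketch of what a triple is and how it is consumed: a triple is a secret-sharing of random a, b and c = a·b, and spending one triple lets parties multiply two secret-shared values while opening only masked differences. The sketch below uses a toy trusted dealer for the offline phase, which is an assumption for illustration; the paper's point is precisely to generate such triples without a dealer, via MHE.

```python
import secrets

P = 2**61 - 1  # toy prime modulus

def share(x: int, n: int) -> list[int]:
    """Additively secret-share x among n parties, mod P."""
    s = [secrets.randbelow(P) for _ in range(n - 1)]
    s.append((x - sum(s)) % P)
    return s

def reconstruct(shares: list[int]) -> int:
    return sum(shares) % P

def dealer_triple(n: int):
    # Offline phase (toy trusted dealer): shares of random a, b and c = a*b.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    return share(a, n), share(b, n), share(a * b % P, n)

def beaver_mul(xs, ys, triple):
    """Online phase: shares of x*y from shares of x, y and one triple."""
    a_sh, b_sh, c_sh = triple
    n = len(xs)
    # Parties open the masked differences d = x - a and e = y - b;
    # since a and b are uniformly random, d and e leak nothing about x, y.
    d = reconstruct([(xs[i] - a_sh[i]) % P for i in range(n)])
    e = reconstruct([(ys[i] - b_sh[i]) % P for i in range(n)])
    # Locally: z = c + d*b + e*a (+ public d*e added by one party) = x*y.
    zs = [(c_sh[i] + d * b_sh[i] + e * a_sh[i]) % P for i in range(n)]
    zs[0] = (zs[0] + d * e) % P
    return zs

xs, ys = share(6, 3), share(7, 3)
zs = beaver_mul(xs, ys, dealer_triple(3))
assert reconstruct(zs) == 42
```

The identity behind the local step is c + d·b + e·a + d·e = ab + (x−a)b + (y−b)a + (x−a)(y−b) = xy, which is why one precomputed triple buys one online multiplication.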
Title: Multiparty Homomorphic Encryption from Ring-Learning-with-Errors. Proceedings on Privacy Enhancing Technologies, 2021(1), 291–311.
Abstract Recent research and articles in popular press have raised concerns about the privacy risks that smart home devices can create for incidental users—people who encounter smart home devices that are owned, controlled, and configured by someone else. In this work, we present the results of a user-centered investigation that explores incidental users’ experiences and the tensions that arise between device owners and incidental users. We conducted five focus group sessions through which we identified specific contexts in which someone might encounter other people’s smart home devices and the main concerns device owners and incidental users have in such situations. We used these findings to inform the design of a survey instrument, which we deployed to a demographically representative sample of 386 adults in the United States. Through this survey, we can better understand which contexts and concerns are most bothersome and how often device owners are willing to accommodate incidental users’ privacy preferences. We found some surprising trends in terms of what people are most worried about and what actions they are willing to take. For example, while participants who did not own devices themselves were often uncomfortable imagining them in their own homes, they were not as concerned about being affected by such devices in homes that they entered as part of their jobs. Participants showed interest in privacy solutions that might have a technical implementation component, but also frequently envisioned an open dialogue between incidental users and device owners to negotiate privacy accommodations.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0060
Camille Cobb, Sruti Bhagavatula, K. Garrett, Alison Hoffman, Varun Rao, Lujo Bauer
Title: “I would have to evaluate their objections”: Privacy tensions between smart home device owners and incidental users. Proceedings on Privacy Enhancing Technologies, 2021(1), 54–75.
Pub Date: 2021-07-23  DOI: 10.2478/popets-2021-0068
Aditya Hegde, Helen Möllering, T. Schneider, Hossein Yalame
Abstract Clustering is a popular unsupervised machine learning technique that groups similar input elements into clusters. It is used in many areas ranging from business analysis to health care. In many of these applications, sensitive information is clustered that should not be leaked. Moreover, nowadays it is often required to combine data from multiple sources to increase the quality of the analysis as well as to outsource complex computation to powerful cloud servers. This calls for efficient privacy-preserving clustering. In this work, we systematically analyze the state-of-the-art in privacy-preserving clustering. We implement and benchmark today’s four most efficient fully private clustering protocols by Cheon et al. (SAC’19), Meng et al. (ArXiv’19), Mohassel et al. (PETS’20), and Bozdemir et al. (ASIACCS’21) with respect to communication, computation, and clustering quality. We compare them, assess their limitations for a practical use in real-world applications, and conclude with open challenges.
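For context on what the surveyed protocols compute privately, the sketch below shows plain, non-private Lloyd's k-means, used here purely as a plaintext baseline; the four surveyed protocols differ in which clustering algorithm they privatize and in the cryptographic machinery used, none of which is reflected in this toy.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain (non-private) Lloyd's k-means on 2-D points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            clusters[i].append(p)
        # Update step: move each center to its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = (sum(p[0] for p in cl) / len(cl),
                              sum(p[1] for p in cl) / len(cl))
    return centers

pts = [(0.0, 0.1), (0.2, 0.0), (9.8, 10.0), (10.1, 9.9)]
centers = sorted(kmeans(pts, 2))
assert centers[0][0] < 1.0 and centers[1][0] > 9.0
```

Every step here (distance comparisons, argmin, division for the mean) is cheap in the clear but expensive under MPC or homomorphic encryption, which is why the benchmarked protocols differ so widely in communication and computation cost.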
Title: SoK: Efficient Privacy-preserving Clustering
Proceedings on Privacy Enhancing Technologies Symposium, 2021(1), 225-248.
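The functionality these protocols compute privately is ordinary clustering. A minimal plaintext k-means sketch (my own illustration, not any of the four surveyed protocols) shows the baseline computation that a fully private protocol must carry out over hidden inputs:

```python
import random

def kmeans(points, k, iters=10, seed=0):
    """Plain (non-private) k-means: groups similar points into k clusters.
    Privacy-preserving clustering protocols compute this same functionality
    while keeping the input points hidden from the computing parties."""
    rng = random.Random(seed)
    centroids = list(rng.sample(points, k))
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = tuple(sum(xs) / len(members)
                                     for xs in zip(*members))
    return centroids, clusters
```

On two well-separated groups of points the sketch recovers the grouping; the surveyed protocols obtain the same clusters while the distance computations and comparisons run on secret-shared or encrypted data, which is exactly where their communication and computation costs arise.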
Pub Date: 2021-07-23 | DOI: 10.2478/popets-2021-0073
T. Veugen, Mark Abspoel
Abstract: We consider secure integer division within a secret-sharing-based secure multi-party computation framework, where the dividend is secret-shared but the divisor is privately known to a single party. We describe various applications where this situation arises. We give a solution in the passive security model and extend it to the active model, achieving complexity linear in the input bit length. We benchmark both solutions using the well-known MP-SPDZ framework in a cloud environment. Our integer division protocol with a private divisor clearly outperforms the secret-divisor solution in both runtime and communication complexity.
Title: Secure integer division with a private divisor
Proceedings on Privacy Enhancing Technologies Symposium, 2021(1), 339-349.
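The setting can be illustrated with additive secret sharing. The toy sketch below (my own illustration, not the paper's protocol) shows the secret-shared dividend, the privately held divisor, and the ideal functionality, shares of x // d, which a real protocol realizes without ever reconstructing the dividend:

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative choice)

def share(x):
    """Split x into two additive shares: x == (s0 + s1) mod P."""
    s0 = secrets.randbelow(P)
    return s0, (x - s0) % P

def reconstruct(s0, s1):
    return (s0 + s1) % P

def ideal_private_division(s0, s1, d):
    """Ideal functionality only: a trusted party reconstructs the shared
    dividend, divides by the divisor d (known to just one party), and
    re-shares the quotient.  An actual protocol produces the same output
    shares without any party ever seeing the dividend in the clear."""
    q = reconstruct(s0, s1) // d
    return share(q)
```

For example, sharing x = 1000 and invoking the ideal functionality with d = 7 yields fresh shares that reconstruct to 142; neither share alone reveals anything about x or q.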