
Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium: Latest Publications

Proof-of-Vax: Studying User Preferences and Perception of Covid Vaccination Certificates
Marvin Kowalewski, Franziska Herbert, Theodor Schnitzler, Markus Dürmuth
Abstract Digital tools play an important role in fighting the current global COVID-19 pandemic. We conducted a representative online study in Germany on a sample of 599 participants to evaluate the user perception of vaccination certificates. In a between-group design, we investigated five variants of vaccination certificates based on deployed and planned designs, including paper-based and app-based variants. Our main results show that the willingness to use and adopt vaccination certificates is generally high. Overall, paper-based vaccination certificates were favored over app-based solutions. The willingness to use digital apps decreased significantly with a higher disposition toward privacy and increased with greater worry about the pandemic and acceptance of the coronavirus vaccination. Vaccination certificates represent an interesting use case for studying privacy perceptions of health-related data. We hope that our work will inform the ongoing design of vaccination certificates, give deeper insights into the privacy of health-related data and apps, and prepare us for future applications of vaccination certificates and health apps in general.
{"title":"Proof-of-Vax: Studying User Preferences and Perception of Covid Vaccination Certificates","authors":"Marvin Kowalewski, Franziska Herbert, Theodor Schnitzler, Markus Dürmuth","doi":"10.2478/popets-2022-0016","DOIUrl":"https://doi.org/10.2478/popets-2022-0016","url":null,"abstract":"Abstract Digital tools play an important role in fighting the current global COVID-19 pandemic. We conducted a representative online study in Germany on a sample of 599 participants to evaluate the user perception of vaccination certificates. We investigated five different variants of vaccination certificates based on deployed and planned designs in a between-group design, including paper-based and app-based variants. Our main results show that the willingness to use and adopt vaccination certificates is generally high. Overall, paper-based vaccination certificates were favored over app-based solutions. The willingness to use digital apps decreased significantly by a higher disposition to privacy and increased by higher worries about the pandemic and acceptance of the coronavirus vaccination. Vaccination certificates resemble an interesting use case for studying privacy perceptions for health-related data. We hope that our work will educate the currently ongoing design of vaccination certificates, give us deeper insights into the privacy of health-related data and apps, and prepare us for future potential applications of vaccination certificates and health apps in general.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"317 - 338"},"PeriodicalIF":0.0,"publicationDate":"2021-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48508195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Privacy-preserving training of tree ensembles over continuous data
Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen
Abstract Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard “in the clear” algorithm to grow decision trees on data with continuous values requires sorting of training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC; hence, finding secure protocols that avoid such an expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision tree based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees (“extra-trees”) on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive secret sharing based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies that are on par with those obtained in the clear. This makes our solution more efficient than the existing approaches, which are based on oblivious sorting.
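The core trick in approach (3), picking cut-points at random instead of sorting, can be illustrated in the clear. The following is a minimal plaintext sketch of an extra-trees style split search, assuming toy data and invented helper names; it involves no MPC or secret sharing and is not the paper's protocol.

```python
import random

def gini(labels):
    """Gini impurity of a list of class labels."""
    if not labels:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / len(labels)) ** 2 for c in counts.values())

def random_split(X, y, n_candidates=5):
    """Extra-trees style split: random feature plus random cut-point.

    Unlike the standard algorithm, no per-feature sorting of the training
    examples is needed; only the min/max of each candidate feature.
    """
    best = None
    n_features = len(X[0])
    for _ in range(n_candidates):
        f = random.randrange(n_features)
        lo, hi = min(r[f] for r in X), max(r[f] for r in X)
        t = random.uniform(lo, hi)  # random cut-point in the feature's range
        left = [y[i] for i, r in enumerate(X) if r[f] <= t]
        right = [y[i] for i, r in enumerate(X) if r[f] > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if best is None or score < best[0]:
            best = (score, f, t)
    return best  # (weighted impurity, feature index, threshold)

# Toy usage with made-up data
X = [[2.5, 1.0], [3.6, 0.2], [1.1, 2.2], [4.0, 1.8]]
y = [0, 0, 1, 1]
print(random_split(X, y))
```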
{"title":"Privacy-preserving training of tree ensembles over continuous data","authors":"Samuel Adams, Chaitali Choudhary, Martine De Cock, Rafael Dowsley, David Melanson, Anderson C. A. Nascimento, Davis Railsback, Jianwei Shen","doi":"10.2478/popets-2022-0042","DOIUrl":"https://doi.org/10.2478/popets-2022-0042","url":null,"abstract":"Abstract Most existing Secure Multi-Party Computation (MPC) protocols for privacy-preserving training of decision trees over distributed data assume that the features are categorical. In real-life applications, features are often numerical. The standard “in the clear” algorithm to grow decision trees on data with continuous values requires sorting of training examples for each feature in the quest for an optimal cut-point in the range of feature values in each node. Sorting is an expensive operation in MPC, hence finding secure protocols that avoid such an expensive step is a relevant problem in privacy-preserving machine learning. In this paper we propose three more efficient alternatives for secure training of decision tree based models on data with continuous features, namely: (1) secure discretization of the data, followed by secure training of a decision tree over the discretized data; (2) secure discretization of the data, followed by secure training of a random forest over the discretized data; and (3) secure training of extremely randomized trees (“extra-trees”) on the original data. Approaches (2) and (3) both involve randomizing feature choices. In addition, in approach (3) cut-points are chosen randomly as well, thereby alleviating the need to sort or to discretize the data up front. We implemented all proposed solutions in the semi-honest setting with additive secret sharing based MPC. In addition to mathematically proving that all proposed approaches are correct and secure, we experimentally evaluated and compared them in terms of classification accuracy and runtime. We privately train tree ensembles over data sets with thousands of instances or features in a few minutes, with accuracies that are at par with those obtained in the clear. This makes our solution more efficient than the existing approaches, which are based on oblivious sorting.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"205 - 226"},"PeriodicalIF":0.0,"publicationDate":"2021-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44040088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Privacy Preference Signals: Past, Present and Future
M. Hils, Daniel W. Woods, Rainer Böhme
Abstract Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an opt-in consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how the TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.
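For context on how such a browser-level preference signal travels, here is a minimal sketch that attaches the Do Not Track header to an ordinary HTTP request using the third-party requests library; the URL is a placeholder and the script is illustrative rather than part of the paper's measurement setup.

```python
import requests  # third-party: pip install requests

# DNT ("Do Not Track") is expressed as a single request header whose value "1"
# signals that the user opts out of tracking. Whether any recipient honors it
# is entirely up to the server, which is exactly the coordination problem the
# paper describes.
headers = {"DNT": "1"}

resp = requests.get("https://example.com/", headers=headers)  # placeholder URL
print(resp.status_code)
print(resp.request.headers.get("DNT"))  # confirms the signal was sent
```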
{"title":"Privacy Preference Signals: Past, Present and Future","authors":"M. Hils, Daniel W. Woods, Rainer Böhme","doi":"10.2478/popets-2021-0069","DOIUrl":"https://doi.org/10.2478/popets-2021-0069","url":null,"abstract":"Abstract Privacy preference signals are digital representations of how users want their personal data to be processed. Such signals must be adopted by both the sender (users) and intended recipients (data processors). Adoption represents a coordination problem that remains unsolved despite efforts dating back to the 1990s. Browsers implemented standards like the Platform for Privacy Preferences (P3P) and Do Not Track (DNT), but vendors profiting from personal data faced few incentives to receive and respect the expressed wishes of data subjects. In the wake of recent privacy laws, a coalition of AdTech firms published the Transparency and Consent Framework (TCF), which defines an optin consent signal. This paper integrates post-GDPR developments into the wider history of privacy preference signals. Our main contribution is a high-frequency longitudinal study describing how TCF signal gained dominance as of February 2021. We explore which factors correlate with adoption at the website level. Both the number of third parties on a website and the presence of Google Ads are associated with higher adoption of TCF. Further, we show that vendors acted as early adopters of TCF 2.0 and provide two case-studies describing how Consent Management Providers shifted existing customers to TCF 2.0. We sketch ways forward for a pro-privacy signal.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"249 - 269"},"PeriodicalIF":0.0,"publicationDate":"2021-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48772269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
User-Level Label Leakage from Gradients in Federated Learning
A. Wainakh, Fabrizio G. Ventola, Till Müßig, Jens Keim, Carlos Garcia Cordero, Ephraim Zimmer, Tim Grube, K. Kersting, M. Mühlhäuser
Abstract Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we add to the recent line of results on the privacy risks of sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potentially sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.
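The intuition that gradient direction reveals labels can be reproduced with a toy model: for softmax cross-entropy, the last-layer bias gradient for class c is the predicted probability of c minus an indicator of whether c is the true label, so classes present in a batch tend to produce negative components. The PyTorch sketch below is a simplified illustration of that effect under these assumptions, not the paper's LLG attack.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, dim, batch = 10, 32, 8

model = nn.Linear(dim, num_classes)            # stand-in for a model's last layer
x = torch.randn(batch, dim)                    # "client" data (random here)
labels = torch.randint(0, num_classes, (batch,))

loss = F.cross_entropy(model(x), labels)
loss.backward()                                # gradients a federated client would share

# For softmax cross-entropy, d(loss)/d(bias_c) = mean_i (p_ic - [y_i == c]),
# so classes occurring in the batch tend to have negative bias-gradient entries.
# This sign heuristic is not guaranteed, but usually works for untrained models.
leaked = {c for c in range(num_classes) if model.bias.grad[c] < 0}
print("true labels in batch:       ", sorted(set(labels.tolist())))
print("inferred from gradient signs:", sorted(leaked))
```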
{"title":"User-Level Label Leakage from Gradients in Federated Learning","authors":"A. Wainakh, Fabrizio G. Ventola, Till Müßig, Jens Keim, Carlos Garcia Cordero, Ephraim Zimmer, Tim Grube, K. Kersting, M. Mühlhäuser","doi":"10.2478/popets-2022-0043","DOIUrl":"https://doi.org/10.2478/popets-2022-0043","url":null,"abstract":"Abstract Federated learning enables multiple users to build a joint model by sharing their model updates (gradients), while their raw data remains local on their devices. In contrast to the common belief that this provides privacy benefits, we here add to the very recent results on privacy risks when sharing gradients. Specifically, we investigate Label Leakage from Gradients (LLG), a novel attack to extract the labels of the users’ training data from their shared gradients. The attack exploits the direction and magnitude of gradients to determine the presence or absence of any label. LLG is simple yet effective, capable of leaking potential sensitive information represented by labels, and scales well to arbitrary batch sizes and multiple classes. We mathematically and empirically demonstrate the validity of the attack under different settings. Moreover, empirical results show that LLG successfully extracts labels with high accuracy at the early stages of model training. We also discuss different defense mechanisms against such leakage. Our findings suggest that gradient compression is a practical technique to mitigate the attack.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2022 1","pages":"227 - 244"},"PeriodicalIF":0.0,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41858773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Blocking Without Breaking: Identification and Mitigation of Non-Essential IoT Traffic
A. Mandalari, Daniel J. Dubois, Roman Kolcun, Muhammad Talha Paracha, H. Haddadi, D. Choffnes
Abstract Despite the prevalence of Internet of Things (IoT) devices, there is little information about the purpose and risks of the Internet traffic these devices generate, and consumers have limited options for controlling those risks. A key open question is whether one can mitigate these risks by automatically blocking some of the Internet connections from IoT devices, without rendering the devices inoperable. In this paper, we address this question by developing a rigorous methodology that relies on automated IoT-device experimentation to reveal which network connections (and the information they expose) are essential, and which are not. We further develop strategies to automatically classify network traffic destinations as either required (i.e., their traffic is essential for devices to work properly) or not, hence allowing firewall rules to block traffic sent to non-required destinations without breaking the functionality of the device. We find that 16 of the 31 devices we tested have at least one blockable non-required destination, with the maximum number of blockable destinations for a device being 11. We further analyze the destinations of network traffic and find that all third parties observed in our experiments are blockable, while first and support parties are neither uniformly required nor non-required. Finally, we demonstrate the limitations of existing blocklists on IoT traffic, propose a set of guidelines for automatically limiting non-essential IoT traffic, and develop a prototype system that implements these guidelines.
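As a rough sketch of what enforcing such a policy on a home gateway could look like, the snippet below emits iptables rules that drop traffic from one IoT device to destinations classified as non-required; the device address, destination list, and classification are invented placeholders, and the rule layout is only one plausible choice, not the paper's prototype.

```python
# Sketch: turn a per-device "non-required destinations" list into firewall rules.
# All addresses below are placeholders (TEST-NET ranges), not measurements.
device_ip = "192.168.1.42"                          # hypothetical smart plug on the LAN
non_required = ["203.0.113.10", "198.51.100.7"]     # destinations deemed non-essential

for dest in non_required:
    # FORWARD chain: a gateway relays the LAN traffic, so blocking happens there.
    print(f"iptables -A FORWARD -s {device_ip} -d {dest} -j DROP")
```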
{"title":"Blocking Without Breaking: Identification and Mitigation of Non-Essential IoT Traffic","authors":"A. Mandalari, Daniel J. Dubois, Roman Kolcun, Muhammad Talha Paracha, H. Haddadi, D. Choffnes","doi":"10.2478/popets-2021-0075","DOIUrl":"https://doi.org/10.2478/popets-2021-0075","url":null,"abstract":"Abstract Despite the prevalence of Internet of Things (IoT) devices, there is little information about the purpose and risks of the Internet traffic these devices generate, and consumers have limited options for controlling those risks. A key open question is whether one can mitigate these risks by automatically blocking some of the Internet connections from IoT devices, without rendering the devices inoperable. In this paper, we address this question by developing a rigorous methodology that relies on automated IoT-device experimentation to reveal which network connections (and the information they expose) are essential, and which are not. We further develop strategies to automatically classify network traffic destinations as either required (i.e., their traffic is essential for devices to work properly) or not, hence allowing firewall rules to block traffic sent to non-required destinations without breaking the functionality of the device. We find that indeed 16 among the 31 devices we tested have at least one blockable non-required destination, with the maximum number of blockable destinations for a device being 11. We further analyze the destination of network traffic and find that all third parties observed in our experiments are blockable, while first and support parties are neither uniformly required or non-required. Finally, we demonstrate the limitations of existing blocklists on IoT traffic, propose a set of guidelines for automatically limiting non-essential IoT traffic, and we develop a prototype system that implements these guidelines.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"369 - 388"},"PeriodicalIF":0.0,"publicationDate":"2021-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41389878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 28
Unlinkable Updatable Hiding Databases and Privacy-Preserving Loyalty Programs
Aditya Damodaran, A. Rial
Abstract Loyalty programs allow vendors to profile buyers based on their purchase histories, which can reveal privacy sensitive information. Existing privacy-friendly loyalty programs force buyers to choose whether their purchases are linkable. Moreover, vendors receive more purchase data than required for the sake of profiling. We propose a privacy-preserving loyalty program where purchases are always unlinkable, yet a vendor can profile a buyer based on her purchase history, which remains hidden from the vendor. Our protocol is based on a new building block, an unlinkable updatable hiding database (HD), which we define and construct. HD allows the vendor to initialize and update databases stored by buyers that contain their purchase histories and their accumulated loyalty points. Updates are unlinkable and, at each update, the database is hidden from the vendor. Buyers can neither modify the database nor use old versions of it. Our construction for HD is practical for large databases.
{"title":"Unlinkable Updatable Hiding Databases and Privacy-Preserving Loyalty Programs","authors":"Aditya Damodaran, A. Rial","doi":"10.2478/popets-2021-0039","DOIUrl":"https://doi.org/10.2478/popets-2021-0039","url":null,"abstract":"Abstract Loyalty programs allow vendors to profile buyers based on their purchase histories, which can reveal privacy sensitive information. Existing privacy-friendly loyalty programs force buyers to choose whether their purchases are linkable. Moreover, vendors receive more purchase data than required for the sake of profiling. We propose a privacy-preserving loyalty program where purchases are always unlinkable, yet a vendor can profile a buyer based on her purchase history, which remains hidden from the vendor. Our protocol is based on a new building block, an unlinkable updatable hiding database (HD), which we define and construct. HD allows the vendor to initialize and update databases stored by buyers that contain their purchase histories and their accumulated loyalty points. Updates are unlinkable and, at each update, the database is hidden from the vendor. Buyers can neither modify the database nor use old versions of it. Our construction for HD is practical for large databases.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"95 - 121"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41553314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ML-CB: Machine Learning Canvas Block
Nathan Reitinger, Michelle L. Mazurek
Abstract With the aim of increasing online privacy, we present a novel, machine-learning based approach to blocking one of the three main ways website visitors are tracked online—canvas fingerprinting. Because the act of canvas fingerprinting uses, at its core, a JavaScript program, and because many of these programs are reused across the web, we are able to fit several machine learning models around a semantic representation of a potentially offending program, achieving accurate and robust classifiers. Our supervised learning approach is trained on a dataset we created by scraping roughly half a million websites using a custom Google Chrome extension storing information related to the canvas. Classification leverages our key insight that the images drawn by canvas fingerprinting programs have a facially distinct appearance, allowing us to manually classify files based on the images drawn; we take this approach one step further and train our classifiers not on the malleable images themselves, but on the more-difficult-to-change, underlying source code generating the images. As a result, ML-CB allows for more accurate tracker blocking.
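A heavily simplified stand-in for this pipeline, classifying raw JavaScript text rather than the semantic representation the paper uses, can be assembled with scikit-learn; the snippets, labels, and character n-gram features below are illustrative assumptions, not ML-CB's actual data or design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: tiny JS fragments labeled 1 (fingerprinting-like) or 0 (benign).
scripts = [
    "var c=document.createElement('canvas');c.getContext('2d').fillText('fp',2,2);c.toDataURL();",
    "canvas.getContext('2d');ctx.fillText(navigator.userAgent,0,0);canvas.toDataURL()",
    "document.getElementById('chart').getContext('2d').fillRect(0,0,10,10);",
    "console.log('hello'); document.title = 'welcome';",
]
labels = [1, 1, 0, 0]

# Character n-grams tolerate minified or obfuscated code better than word tokens.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(max_iter=1000),
)
clf.fit(scripts, labels)

# Classify a new, unseen fragment.
print(clf.predict(["var c=document.createElement('canvas');ctx.fillText(screen.width,4,4);c.toDataURL();"]))
```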
{"title":"ML-CB: Machine Learning Canvas Block","authors":"Nathan Reitinger, Michelle L. Mazurek","doi":"10.2478/popets-2021-0056","DOIUrl":"https://doi.org/10.2478/popets-2021-0056","url":null,"abstract":"Abstract With the aim of increasing online privacy, we present a novel, machine-learning based approach to blocking one of the three main ways website visitors are tracked online—canvas fingerprinting. Because the act of canvas fingerprinting uses, at its core, a JavaScript program, and because many of these programs are reused across the web, we are able to fit several machine learning models around a semantic representation of a potentially offending program, achieving accurate and robust classifiers. Our supervised learning approach is trained on a dataset we created by scraping roughly half a million websites using a custom Google Chrome extension storing information related to the canvas. Classification leverages our key insight that the images drawn by canvas fingerprinting programs have a facially distinct appearance, allowing us to manually classify files based on the images drawn; we take this approach one step further and train our classifiers not on the malleable images themselves, but on the more-difficult-to-change, underlying source code generating the images. As a result, ML-CB allows for more accurate tracker blocking.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"453 - 473"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42995195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Defining Privacy: How Users Interpret Technical Terms in Privacy Policies
Jenny Tang, Hannah Shoemaker, Ada Lerner, Eleanor Birrell
Abstract Recent privacy regulations such as GDPR and CCPA have emphasized the need for transparent, understandable privacy policies. This work investigates the role technical terms play in policy transparency. We identify potentially misunderstood technical terms that appear in privacy policies through a survey of current privacy policies and a pilot user study. We then run a user study on Amazon Mechanical Turk to evaluate whether users can accurately define these technical terms, to identify commonly held misconceptions, and to investigate how the use of technical terms affects users’ comfort with privacy policies. We find that technical terms are broadly misunderstood and that particular misconceptions are common. We also find that the use of technical terms affects users’ comfort with various privacy policies and their reported likeliness to accept those policies. We conclude that current use of technical terms in privacy policies poses a challenge to policy transparency and user privacy, and that companies should take steps to mitigate this effect.
{"title":"Defining Privacy: How Users Interpret Technical Terms in Privacy Policies","authors":"Jenny Tang, Hannah Shoemaker, Ada Lerner, Eleanor Birrell","doi":"10.2478/popets-2021-0038","DOIUrl":"https://doi.org/10.2478/popets-2021-0038","url":null,"abstract":"Abstract Recent privacy regulations such as GDPR and CCPA have emphasized the need for transparent, understandable privacy policies. This work investigates the role technical terms play in policy transparency. We identify potentially misunderstood technical terms that appear in privacy policies through a survey of current privacy policies and a pilot user study. We then run a user study on Amazon Mechanical Turk to evaluate whether users can accurately define these technical terms, to identify commonly held misconceptions, and to investigate how the use of technical terms affects users’ comfort with privacy policies. We find that technical terms are broadly misunderstood and that particular misconceptions are common. We also find that the use of technical terms affects users’ comfort with various privacy policies and their reported likeliness to accept those policies. We conclude that current use of technical terms in privacy policies poses a challenge to policy transparency and user privacy, and that companies should take steps to mitigate this effect.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"70 - 94"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42849258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Faster homomorphic comparison operations for BGV and BFV
Ilia Iliashenko, Vincent Zucca
Abstract Fully homomorphic encryption (FHE) makes it possible to compute any function on encrypted values. However, in practice, there is no universal FHE scheme that is efficient in all possible use cases. In this work, we show that FHE schemes suitable for arithmetic circuits (e.g. BGV or BFV) have performance similar to FHE schemes for non-arithmetic circuits (TFHE) in basic comparison tasks such as less-than, maximum and minimum operations. Our implementation of the less-than function in the HElib library is up to 3 times faster than the prior work based on BGV/BFV. It compares a pair of 64-bit integers in 11 milliseconds, sorts 64 32-bit integers in 19 seconds, and finds the minimum of 64 32-bit integers in 9.5 seconds on an average laptop without multi-threading.
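To see why comparison is awkward for arithmetic-circuit schemes, it helps to look at a plaintext analogue: integers are split into small digits, and less-than is computed lexicographically from digit-wise equality and less-than tests, the kind of low-degree operations that BGV/BFV-style circuits evaluate. The sketch below operates on plaintext digits only and shows a generic textbook decomposition, not the paper's optimized HElib circuit.

```python
def digits(x, base, ndigits):
    """Little-endian base-`base` digits of x, padded to ndigits."""
    out = []
    for _ in range(ndigits):
        out.append(x % base)
        x //= base
    return out

def less_than(x, y, base=16, ndigits=16):
    """Lexicographic less-than over digits: x < y iff some digit of x is smaller
    while all more-significant digits are equal. Each eq/lt test on a small digit
    is what a homomorphic circuit would realize as a low-degree polynomial."""
    xd, yd = digits(x, base, ndigits), digits(y, base, ndigits)
    result = 0
    all_equal_above = 1
    for i in reversed(range(ndigits)):       # most significant digit first
        eq = 1 if xd[i] == yd[i] else 0
        lt = 1 if xd[i] < yd[i] else 0
        result |= all_equal_above & lt
        all_equal_above &= eq
    return bool(result)

assert less_than(3, 7) and not less_than(7, 3) and not less_than(5, 5)
```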
{"title":"Faster homomorphic comparison operations for BGV and BFV","authors":"Ilia Iliashenko, Vincent Zucca","doi":"10.2478/popets-2021-0046","DOIUrl":"https://doi.org/10.2478/popets-2021-0046","url":null,"abstract":"Abstract Fully homomorphic encryption (FHE) allows to compute any function on encrypted values. However, in practice, there is no universal FHE scheme that is effi-cient in all possible use cases. In this work, we show that FHE schemes suitable for arithmetic circuits (e.g. BGV or BFV) have a similar performance as FHE schemes for non-arithmetic circuits (TFHE) in basic comparison tasks such as less-than, maximum and minimum operations. Our implementation of the less-than function in the HElib library is up to 3 times faster than the prior work based on BGV/BFV. It allows to compare a pair of 64-bit integers in 11 milliseconds, sort 64 32-bit integers in 19 seconds and find the minimum of 64 32-bit integers in 9.5 seconds on an average laptop without multi-threading.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"246 - 264"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48028323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 50
Awareness, Adoption, and Misconceptions of Web Privacy Tools
Peter Story, Daniel Smullen, Yaxing Yao, A. Acquisti, L. Cranor, N. Sadeh, F. Schaub
Abstract Privacy and security tools can help users protect themselves online. Unfortunately, people are often unaware of such tools, and have potentially harmful misconceptions about the protections provided by the tools they know about. Effectively encouraging the adoption of privacy tools requires insights into people’s tool awareness and understanding. Towards that end, we conducted a demographically-stratified survey of 500 US participants to measure their use of and perceptions about five web browsing-related tools: private browsing, VPNs, Tor Browser, ad blockers, and antivirus software. We asked about participants’ perceptions of the protections provided by these tools across twelve realistic scenarios. Our thematic analysis of participants’ responses revealed diverse forms of misconceptions. Some types of misconceptions were common across tools and scenarios, while others were associated with particular combinations of tools and scenarios. For example, some participants suggested that the privacy protections offered by private browsing, VPNs, and Tor Browser would also protect them from security threats – a misconception that might expose them to preventable risks. We anticipate that our findings will help researchers, tool designers, and privacy advocates educate the public about privacy- and security-enhancing technologies.
{"title":"Awareness, Adoption, and Misconceptions of Web Privacy Tools","authors":"Peter Story, Daniel Smullen, Yaxing Yao, A. Acquisti, L. Cranor, N. Sadeh, F. Schaub","doi":"10.2478/popets-2021-0049","DOIUrl":"https://doi.org/10.2478/popets-2021-0049","url":null,"abstract":"Abstract Privacy and security tools can help users protect themselves online. Unfortunately, people are often unaware of such tools, and have potentially harmful misconceptions about the protections provided by the tools they know about. Effectively encouraging the adoption of privacy tools requires insights into people’s tool awareness and understanding. Towards that end, we conducted a demographically-stratified survey of 500 US participants to measure their use of and perceptions about five web browsing-related tools: private browsing, VPNs, Tor Browser, ad blockers, and antivirus software. We asked about participants’ perceptions of the protections provided by these tools across twelve realistic scenarios. Our thematic analysis of participants’ responses revealed diverse forms of misconceptions. Some types of misconceptions were common across tools and scenarios, while others were associated with particular combinations of tools and scenarios. For example, some participants suggested that the privacy protections offered by private browsing, VPNs, and Tor Browser would also protect them from security threats – a misconception that might expose them to preventable risks. We anticipate that our findings will help researchers, tool designers, and privacy advocates educate the public about privacy- and security-enhancing technologies.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":"2021 1","pages":"308 - 333"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42662689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 30