
Latest publications in Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium

I-GWAS: Privacy-Preserving Interdependent Genome-Wide Association Studies
Túlio Pascoal, Jérémie Decouchant, Antoine Boutet, Marcus Völp
Genome-wide Association Studies (GWASes) identify genomic variations that are statistically associated with a trait, such as a disease, in a group of individuals. Unfortunately, careless sharing of GWAS statistics might give rise to privacy attacks. Several works attempted to reconcile secure processing with privacy-preserving releases of GWASes. However, we highlight that these approaches remain vulnerable if GWASes utilize overlapping sets of individuals and genomic variations. In such conditions, we show that even when relying on state-of-the-art techniques for protecting releases, an adversary could reconstruct the genomic variations of up to 28.6% of participants, and that the released statistics of up to 92.3% of the genomic variations would enable membership inference attacks. We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes. I-GWAS continuously releases privacy-preserving and noise-free GWAS results as new genomes become available.
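The interdependence risk can be illustrated with a toy difference attack (illustrative only, with made-up counts; this is not I-GWAS's reconstruction method): if two releases publish per-variant minor-allele counts for cohorts that differ by exactly one participant, subtracting the counts reveals that participant's genotypes.

```python
# Toy difference attack on overlapping GWAS releases (hypothetical data,
# not the paper's algorithm).

def recover_genotypes(counts_full, counts_overlap):
    """Given per-variant minor-allele counts for a cohort C and for
    C minus one participant, the difference is that participant's
    genotype at each variant (0, 1, or 2 minor alleles)."""
    return [a - b for a, b in zip(counts_full, counts_overlap)]

# Four variants, cohort of three individuals vs. the same cohort minus one.
full_release = [3, 1, 4, 2]
overlap_release = [2, 1, 2, 1]
print(recover_genotypes(full_release, overlap_release))  # [1, 0, 2, 1]
```

The same arithmetic generalizes to test statistics derived from allele counts, which is why releases over overlapping cohorts must be coordinated.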
{"title":"I-GWAS: Privacy-Preserving Interdependent Genome-Wide Association Studies","authors":"Túlio Pascoal, Jérémie Decouchant, Antoine Boutet, Marcus Völp","doi":"10.56553/popets-2023-0026","DOIUrl":"https://doi.org/10.56553/popets-2023-0026","url":null,"abstract":"Genome-wide Association Studies (GWASes) identify genomic variations that are statistically associated with a trait, such as a disease, in a group of individuals. Unfortunately, careless sharing of GWAS statistics might give rise to privacy attacks. Several works attempted to reconcile secure processing with privacy-preserving releases of GWASes. However, we highlight that these approaches remain vulnerable if GWASes utilize overlapping sets of individuals and genomic variations. In such conditions, we show that even when relying on state-of-the-art techniques for protecting releases, an adversary could reconstruct the genomic variations of up to 28.6% of participants, and that the released statistics of up to 92.3% of the genomic variations would enable membership inference attacks. We introduce I-GWAS, a novel framework that securely computes and releases the results of multiple possibly interdependent GWASes. I-GWAS continuously releases privacy-preserving and noise-free GWAS results as new genomes become available.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135237088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Privacy-Aware Adversarial Network in Human Mobility Prediction
Yuting Zhan, Hamed Haddadi, Afra Mashhadi
As mobile devices and location-based services are increasingly developed in different smart city scenarios and applications, many unexpected privacy leakages have arisen due to geolocated data collection and sharing. User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications. Significantly, four spatio-temporal points are enough to uniquely identify 95% of the individuals, which exacerbates personal information leakages. To tackle malicious purposes such as user re-identification, we propose an LSTM-based adversarial mechanism with representation learning to attain a privacy-preserving feature representation of the original geolocated data (i.e., mobility data) for a sharing purpose. These representations aim to maximally reduce the chance of user re-identification and full data reconstruction with a minimal utility budget (i.e., loss). We train the mechanism by quantifying privacy-utility trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability. We report an exploratory analysis that enables the user to assess this trade-off with a specific loss function and its weight parameters. The extensive comparison results on four representative mobility datasets demonstrate the superiority of our proposed architecture in mobility privacy protection and the efficiency of the proposed privacy-preserving features extractor. We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility. Our results also show that by exploring the Pareto optimal setting, we can simultaneously increase both privacy (45%) and utility (32%).
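The "four spatio-temporal points identify 95% of individuals" observation can be reproduced in miniature (toy trajectories, not the paper's datasets): a user is unique if some k-subset of their (location, time) points appears in no other user's trajectory.

```python
from itertools import combinations

def is_unique(user, trajectories, k):
    """True if some k-subset of this user's (location, time) points is
    contained in no other user's trajectory, i.e. those k points suffice
    to re-identify the user. Toy illustration of uniqueness-based risk."""
    others = [set(t) for u, t in trajectories.items() if u != user]
    for subset in combinations(trajectories[user], k):
        if not any(set(subset) <= other for other in others):
            return True
    return False

# Toy data: points are (cell_id, hour) pairs.
trajs = {
    "alice": [("a", 8), ("b", 9), ("c", 18)],
    "bob":   [("a", 8), ("b", 9), ("d", 18)],
    "carol": [("a", 8), ("b", 9), ("c", 18)],  # identical to alice
}
print([u for u in trajs if is_unique(u, trajs, 2)])  # ['bob']
```

Bob's evening cell makes any 2-subset containing it unique, while Alice and Carol shadow each other completely; privacy-preserving representations aim to push more users into the shadowed case.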
{"title":"Privacy-Aware Adversarial Network in Human Mobility Prediction","authors":"Yuting Zhan, Hamed Haddadi, Afra Mashhadi","doi":"10.56553/popets-2023-0032","DOIUrl":"https://doi.org/10.56553/popets-2023-0032","url":null,"abstract":"As mobile devices and location-based services are increasingly developed in different smart city scenarios and applications, many unexpected privacy leakages have arisen due to geolocated data collection and sharing. User re-identification and other sensitive inferences are major privacy threats when geolocated data are shared with cloud-assisted applications. Significantly, four spatio-temporal points are enough to uniquely identify 95% of the individuals, which exacerbates personal information leakages. To tackle malicious purposes such as user re-identification, we propose an LSTM-based adversarial mechanism with representation learning to attain a privacy-preserving feature representation of the original geolocated data (i.e., mobility data) for a sharing purpose. These representations aim to maximally reduce the chance of user re-identification and full data reconstruction with a minimal utility budget (i.e., loss). We train the mechanism by quantifying privacy-utility trade-off of mobility datasets in terms of trajectory reconstruction risk, user re-identification risk, and mobility predictability. We report an exploratory analysis that enables the user to assess this trade-off with a specific loss function and its weight parameters. The extensive comparison results on four representative mobility datasets demonstrate the superiority of our proposed architecture in mobility privacy protection and the efficiency of the proposed privacy-preserving features extractor. We show that the privacy of mobility traces attains decent protection at the cost of marginal mobility utility. 
Our results also show that by exploring the Pareto optimal setting, we can simultaneously increase both privacy (45%) and utility (32%).","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135420123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?
Ahmed Roushdy Elkordy, Jiang Zhang, Yahya H. Ezzeldin, Konstantinos Psounis, Salman Avestimehr
Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users while avoiding moving the data off-device. However, while data never leaves users’ devices, privacy still cannot be guaranteed since significant computations on users’ training data are shared in the form of trained local models. These local models have recently been shown to pose a substantial privacy threat through different privacy attacks such as model inversion attacks. As a remedy, Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL, by guaranteeing the server can only learn the global aggregated model update but not the individual model updates. While SA ensures no additional information is leaked about the individual model update beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer, as information about the individual dataset can still potentially leak through the aggregated model computed at the server. In this work, we perform a first analysis of the formal privacy guarantees for FL with SA. Specifically, we use Mutual Information (MI) as a quantification metric and derive upper bounds on how much information about each user's dataset can leak through the aggregated model update. When using the FedSGD aggregation algorithm, our theoretical bounds show that the amount of privacy leakage reduces linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI Neural Estimator to empirically evaluate the privacy leakage under different FL setups on both the MNIST and CIFAR10 datasets. Our experiments verify our theoretical bounds for FedSGD, which show a reduction in privacy leakage as the number of users and local batch size grow, and an increase in privacy leakage as the number of training rounds increases. We also observe similar dependencies for the FedAvg and FedProx protocols.
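The SA guarantee the analysis builds on can be sketched with pairwise additive masking (a simplified stand-in for real SA protocols, which additionally handle dropouts, key agreement, and finite-field arithmetic): each pair of users shares a mask that one adds and the other subtracts, so individual masked updates look random to the server while the masks cancel in the sum.

```python
import random

def mask_updates(updates, seed=42):
    """Pairwise additive masking (simplified sketch): for each user pair
    (i, j), user i adds a shared random mask and user j subtracts it, so
    the server learns only the aggregate of the plain updates."""
    n, d = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    rng = random.Random(seed)  # stands in for a pairwise-agreed PRG seed
    for i in range(n):
        for j in range(i + 1, n):
            for t in range(d):
                m = rng.uniform(-1.0, 1.0)
                masked[i][t] += m
                masked[j][t] -= m
    return masked

updates = [[0.1, 0.2], [0.3, -0.1], [-0.2, 0.4]]
masked = mask_updates(updates)
aggregate = [sum(col) for col in zip(*masked)]
plain_sum = [sum(col) for col in zip(*updates)]
print(all(abs(a - b) < 1e-9 for a, b in zip(aggregate, plain_sum)))  # True
```

The paper's question is precisely what this aggregate still reveals about each `updates[i]`, which is where the mutual-information bounds come in.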
{"title":"How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?","authors":"Ahmed Roushdy Elkordy, Jiang Zhang, Yahya H. Ezzeldin, Konstantinos Psounis, Salman Avestimehr","doi":"10.56553/popets-2023-0030","DOIUrl":"https://doi.org/10.56553/popets-2023-0030","url":null,"abstract":"Federated learning (FL) has attracted growing interest for enabling privacy-preserving machine learning on data stored at multiple users while avoiding moving the data off-device. However, while data never leaves users’ devices, privacy still cannot be guaranteed since significant computations on users’ training data are shared in the form of trained local models. These local models have recently been shown to pose a substantial privacy threat through different privacy attacks such as model inversion attacks. As a remedy, Secure Aggregation (SA) has been developed as a framework to preserve privacy in FL, by guaranteeing the server can only learn the global aggregated model update but not the individual model updates.While SA ensures no additional information is leaked about the individual model update beyond the aggregated model update, there are no formal guarantees on how much privacy FL with SA can actually offer; as information about the individual dataset can still potentially leak through the aggregated model computed at the server. In this work, we perform a first analysis of the formal privacy guarantees for FL with SA. Specifically, we use Mutual Information (MI) as a quantification metric and derive upper bounds on how much information about each user's dataset can leak through the aggregated model update. When using the FedSGD aggregation algorithm, our theoretical bounds show that the amount of privacy leakage reduces linearly with the number of users participating in FL with SA. To validate our theoretical bounds, we use an MI Neural Estimator to empirically evaluate the privacy leakage under different FL setups on both the MNIST and CIFAR10 datasets. 
Our experiments verify our theoretical bounds for FedSGD, which show a reduction in privacy leakage as the number of users and local batch size grow, and an increase in privacy leakage as the number of training rounds increases. We also observe similar dependencies for the FedAvg and FedProx protocol.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136258713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Checking Websites’ GDPR Consent Compliance for Marketing Emails
Karel Kubicek, Jakob Merane, C. C. Jiménez, A. Stremitzer, S. Bechtold, D. Basin
Abstract The sending of marketing emails is regulated to protect users from unsolicited emails. For instance, the European Union’s ePrivacy Directive states that marketers must obtain users’ prior consent, and the General Data Protection Regulation (GDPR) specifies further that such consent must be freely given, specific, informed, and unambiguous. Based on these requirements, we design a labeling of legal characteristics for websites and emails. This leads to a simple decision procedure that detects potential legal violations. Using our procedure, we evaluated 1000 websites and the 5000 emails resulting from registering to these websites. Both datasets and evaluations are available upon request. We find that 21.9% of the websites contain potential violations of privacy and unfair competition rules, either in the registration process (17.3%) or email communication (17.7%). We demonstrate with a statistical analysis the possibility of automatically detecting such potential violations.
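A decision procedure over such labels might look like the following sketch (the label names and rules are hypothetical illustrations, not the paper's actual labeling scheme):

```python
def potential_violations(labels):
    """Map hypothetical website/email labels to potential violations of
    the GDPR consent requirements (freely given, specific, informed,
    unambiguous) and the ePrivacy prior-consent rule. Illustrative only."""
    issues = []
    if labels.get("marketing_email_received") and not labels.get("consent_requested"):
        issues.append("no prior consent (ePrivacy)")
    if labels.get("consent_checkbox_prechecked"):
        issues.append("consent not unambiguous")
    if labels.get("consent_bundled_with_terms"):
        issues.append("consent not freely given / not specific")
    if labels.get("purpose_not_stated"):
        issues.append("consent not informed")
    return issues

site = {
    "marketing_email_received": True,
    "consent_requested": False,
    "consent_checkbox_prechecked": True,
}
print(potential_violations(site))
# ['no prior consent (ePrivacy)', 'consent not unambiguous']
```

The value of such a procedure is that each flagged issue traces back to a concrete, observable label, which is what makes large-scale automated screening feasible.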
{"title":"Checking Websites’ GDPR Consent Compliance for Marketing Emails","authors":"Karel Kubicek, Jakob Merane, C. C. Jiménez, A. Stremitzer, S. Bechtold, D. Basin","doi":"10.2478/popets-2022-0046","DOIUrl":"https://doi.org/10.2478/popets-2022-0046","url":null,"abstract":"Abstract The sending of marketing emails is regulated to protect users from unsolicited emails. For instance, the European Union’s ePrivacy Directive states that marketers must obtain users’ prior consent, and the General Data Protection Regulation (GDPR) specifies further that such consent must be freely given, specific, informed, and unambiguous. Based on these requirements, we design a labeling of legal characteristics for websites and emails. This leads to a simple decision procedure that detects potential legal violations. Using our procedure, we evaluated 1000 websites and the 5000 emails resulting from registering to these websites. Both datasets and evaluations are available upon request. We find that 21.9% of the websites contain potential violations of privacy and unfair competition rules, either in the registration process (17.3%) or email communication (17.7%). We demonstrate with a statistical analysis the possibility of automatically detecting such potential violations.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43602394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Employees’ privacy perceptions: exploring the dimensionality and antecedents of personal data sensitivity and willingness to disclose
Jan Tolsdorf, D. Reinhardt, Luigi Lo Iacono
Abstract The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.
{"title":"Employees’ privacy perceptions: exploring the dimensionality and antecedents of personal data sensitivity and willingness to disclose","authors":"Jan Tolsdorf, D. Reinhardt, Luigi Lo Iacono","doi":"10.2478/popets-2022-0036","DOIUrl":"https://doi.org/10.2478/popets-2022-0036","url":null,"abstract":"Abstract The processing of employees’ personal data is dramatically increasing, yet there is a lack of tools that allow employees to manage their privacy. In order to develop these tools, one needs to understand what sensitive personal data are and what factors influence employees’ willingness to disclose. Current privacy research, however, lacks such insights, as it has focused on other contexts in recent decades. To fill this research gap, we conducted a cross-sectional survey with 553 employees from Germany. Our survey provides multiple insights into the relationships between perceived data sensitivity and willingness to disclose in the employment context. Among other things, we show that the perceived sensitivity of certain types of data differs substantially from existing studies in other contexts. Moreover, currently used legal and contextual distinctions between different types of data do not accurately reflect the subtleties of employees’ perceptions. Instead, using 62 different data elements, we identified four groups of personal data that better reflect the multi-dimensionality of perceptions. However, previously found common disclosure antecedents in the context of online privacy do not seem to affect them. We further identified three groups of employees that differ in their perceived data sensitivity and willingness to disclose, but neither in their privacy beliefs nor in their demographics. 
Our findings thus provide employers, policy makers, and researchers with a better understanding of employees’ privacy perceptions and serve as a basis for future targeted research on specific types of personal data and employees.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43118224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Understanding Utility and Privacy of Demographic Data in Education Technology by Causal Analysis and Adversarial-Censoring
Rakibul Hasan, Mario Fritz
Abstract Education technologies (EdTech) are becoming pervasive due to their cost-effectiveness, accessibility, and scalability. They also experienced accelerated market growth during the recent pandemic. EdTech collects massive amounts of students’ behavioral and (sensitive) demographic data, often justified by the potential to help students by personalizing education. Researchers voiced concerns regarding privacy and data abuses (e.g., targeted advertising) in the absence of clearly defined data collection and sharing policies. However, technical contributions to alleviating students’ privacy risks have been scarce. In this paper, we argue against collecting demographic data by showing that gender—a widely used demographic feature—does not causally affect students’ course performance: arguably the most popular target of predictive models. Then, we show that gender can be inferred from behavioral data; thus, simply leaving them out does not protect students’ privacy. Combining a feature selection mechanism with an adversarial censoring technique, we propose a novel approach to create a ‘private’ version of a dataset comprising of fewer features that predict the target without revealing the gender, and are interpretive. We conduct comprehensive experiments on a public dataset to demonstrate the robustness and generalizability of our mechanism.
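The censoring objective can be approximated with a crude correlation filter (a toy stand-in for the paper's adversarial mechanism, on made-up data): keep features that correlate with course performance but only weakly with gender.

```python
def pearson(x, y):
    """Plain Pearson correlation, no external dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def select_private_features(features, target, sensitive, t_util=0.5, t_priv=0.3):
    """Keep features predictive of the target (|corr| >= t_util) but
    weakly related to the sensitive attribute (|corr| < t_priv).
    A crude proxy for adversarial censoring, not the paper's method."""
    return [name for name, col in features.items()
            if abs(pearson(col, target)) >= t_util
            and abs(pearson(col, sensitive)) < t_priv]

# Hypothetical data: clicks track grades; writing_style tracks gender.
grades = [1, 2, 3, 4, 5, 6]
gender = [0, 1, 1, 0, 0, 1]
feats = {"clicks": [1, 2, 3, 4, 5, 6], "writing_style": [0, 1, 1, 0, 0, 1]}
print(select_private_features(feats, grades, gender))  # ['clicks']
```

A linear filter like this misses nonlinear leakage, which is exactly why the paper pairs feature selection with an adversarial censor rather than relying on correlations alone.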
{"title":"Understanding Utility and Privacy of Demographic Data in Education Technology by Causal Analysis and Adversarial-Censoring","authors":"Rakibul Hasan, Mario Fritz","doi":"10.2478/popets-2022-0044","DOIUrl":"https://doi.org/10.2478/popets-2022-0044","url":null,"abstract":"Abstract Education technologies (EdTech) are becoming pervasive due to their cost-effectiveness, accessibility, and scalability. They also experienced accelerated market growth during the recent pandemic. EdTech collects massive amounts of students’ behavioral and (sensitive) demographic data, often justified by the potential to help students by personalizing education. Researchers voiced concerns regarding privacy and data abuses (e.g., targeted advertising) in the absence of clearly defined data collection and sharing policies. However, technical contributions to alleviating students’ privacy risks have been scarce. In this paper, we argue against collecting demographic data by showing that gender—a widely used demographic feature—does not causally affect students’ course performance: arguably the most popular target of predictive models. Then, we show that gender can be inferred from behavioral data; thus, simply leaving them out does not protect students’ privacy. Combining a feature selection mechanism with an adversarial censoring technique, we propose a novel approach to create a ‘private’ version of a dataset comprising of fewer features that predict the target without revealing the gender, and are interpretive. We conduct comprehensive experiments on a public dataset to demonstrate the robustness and generalizability of our mechanism.","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. 
Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44121355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Privacy-Preserving Positioning in Wi-Fi Fine Timing Measurement
Domien Schepers, Aanjhan Ranganathan
Abstract With the standardization of Wi-Fi Fine Timing Measurement (Wi-Fi FTM; IEEE 802.11mc), the IEEE introduced indoor positioning for Wi-Fi networks. To date, Wi-Fi FTM is the most widely supported Wi-Fi distance measurement and positioning system. In this paper, we perform the first privacy analysis of Wi-Fi FTM and evaluate devices from a wide variety of vendors. We find the protocol inherently leaks location-sensitive information. Most notably, we present techniques that allow any client to be localized and tracked by a solely passive adversary. We identify flaws in Wi-Fi FTM MAC address randomization and present techniques to fingerprint stations with firmware-specific granularity, further leaking client identity. We address these shortcomings and present a privacy-preserving passive positioning system that leverages existing Wi-Fi FTM infrastructure and requires no hardware changes. Due to the absence of any client-side transmission, our design hides the very existence of a client and as a side-effect improves overall scalability without compromising on accuracy. Finally, we present privacy-enhancing recommendations for the current and next-generation protocols such as Wi-Fi Next Generation Positioning (Wi-Fi NGP; IEEE 802.11az).
{"title":"Privacy-Preserving Positioning in Wi-Fi Fine Timing Measurement","authors":"Domien Schepers, Aanjhan Ranganathan","doi":"10.2478/popets-2022-0048","DOIUrl":"https://doi.org/10.2478/popets-2022-0048","url":null,"abstract":"Abstract With the standardization of Wi-Fi Fine Timing Measurement (Wi-Fi FTM; IEEE 802.11mc), the IEEE introduced indoor positioning for Wi-Fi networks. To date, Wi-Fi FTM is the most widely supported Wi-Fi distance measurement and positioning system. In this paper, we perform the first privacy analysis of Wi-Fi FTM and evaluate devices from a wide variety of vendors. We find the protocol inherently leaks location-sensitive information. Most notably, we present techniques that allow any client to be localized and tracked by a solely passive adversary. We identify flaws inWi-Fi FTM MAC address randomization and present techniques to fingerprint stations with firmware-specific granularity further leaking client identity. We address these shortcomings and present a privacy-preserving passive positioning system that leverages existing Wi-Fi FTM infrastructure and requires no hardware changes. Due to the absence of any client-side transmission, our design hides the very existence of a client and as a side-effect improves overall scalability without compromising on accuracy. Finally, we present privacy-enhancing recommendations for the current and next-generation protocols such as Wi-Fi Next Generation Positioning (Wi-Fi NGP; IEEE 802.11az).","PeriodicalId":74556,"journal":{"name":"Proceedings on Privacy Enhancing Technologies. 
Privacy Enhancing Technologies Symposium","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41611085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
PUBA: Privacy-Preserving User-Data Bookkeeping and Analytics
Valerie Fetzer, Marcel Keller, Sven Maier, Markus Raiber, Andy Rupp, Rebecca Schwerdt
Abstract In this paper we propose Privacy-preserving User-data Bookkeeping & Analytics (PUBA), a building block destined to enable the implementation of business models (e.g., targeted advertising) and regulations (e.g., fraud detection) requiring user-data analysis in a privacy-preserving way. In PUBA, users keep an unlinkable but authenticated cryptographic logbook containing their historic data on their device. This logbook can only be updated by the operator while its content is not revealed. Users can take part in a privacy-preserving analytics computation, where it is ensured that their logbook is up-to-date and authentic while the potentially secret analytics function is verified to be privacy-friendly. Taking constrained devices into account, users may also outsource analytic computations (to a potentially malicious proxy not colluding with the operator). We model our novel building block in the Universal Composability framework and provide a practical protocol instantiation. To demonstrate the flexibility of PUBA, we sketch instantiations of privacy-preserving fraud detection and targeted advertising, although it could be used in many more scenarios, e.g. data analytics for multi-modal transportation systems. We implemented our bookkeeping protocols and an exemplary outsourced analytics computation based on logistic regression using the MP-SPDZ MPC framework. Performance evaluations using a smartphone as user device and more powerful hardware for operator and proxy suggest that PUBA for smaller logbooks can indeed be practical.
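The "operator-updatable, authenticated logbook" idea can be caricatured with an HMAC chain (greatly simplified: PUBA's actual construction relies on unlinkable cryptographic credentials and MPC, not a bare MAC chain, and the chain below offers no unlinkability):

```python
import hashlib
import hmac

def extend_log(op_key, prev_tag, entry):
    """Operator appends an entry: tag_i = HMAC(key, tag_{i-1} || entry).
    Without op_key the user cannot forge or reorder entries; the operator
    can verify an entire history by replaying the chain."""
    return hmac.new(op_key, prev_tag + entry, hashlib.sha256).digest()

key = b"operator-secret"           # hypothetical operator MAC key
GENESIS = b"\x00" * 32             # fixed genesis tag
entries = [b"purchase:3.50", b"purchase:7.20"]

tag = GENESIS
for e in entries:
    tag = extend_log(key, tag, e)

# Verification: replaying the same entries yields the same final tag,
# while a tampered entry yields a different one.
replay = GENESIS
for e in entries:
    replay = extend_log(key, replay, e)
print(tag == replay)  # True
```

The gap between this sketch and PUBA is the point of the paper: making such a log unlinkable across updates and usable inside a verified, privacy-friendly analytics computation.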
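The operator-updatable, authenticated logbook at the heart of PUBA can be illustrated with a toy hash-chained HMAC log. This is a sketch only: the class name and the HMAC chain are illustrative assumptions, not the paper's construction, which relies on unlinkable cryptographic tokens and MPC rather than a shared MAC key.

```python
import hashlib
import hmac


class Logbook:
    """Toy append-only logbook: the operator holds a MAC key and is the
    only party able to produce valid entries; each tag also covers the
    previous tag, chaining entries so none can be dropped, reordered,
    or modified without detection."""

    def __init__(self):
        self.entries = []  # list of (data, tag) pairs

    def append(self, key: bytes, data: bytes) -> None:
        # Chain the new tag over the previous tag and the new data.
        prev_tag = self.entries[-1][1] if self.entries else b"\x00" * 32
        tag = hmac.new(key, prev_tag + data, hashlib.sha256).digest()
        self.entries.append((data, tag))

    def verify(self, key: bytes) -> bool:
        # Recompute the chain from the start; any tampering breaks it.
        prev_tag = b"\x00" * 32
        for data, tag in self.entries:
            expect = hmac.new(key, prev_tag + data, hashlib.sha256).digest()
            if not hmac.compare_digest(expect, tag):
                return False
            prev_tag = tag
        return True


key = b"operator-secret"
log = Logbook()
log.append(key, b"purchase:42")
log.append(key, b"purchase:7")
assert log.verify(key)

# Tampering with an entry's data invalidates the chain.
log.entries[0] = (b"purchase:9000", log.entries[0][1])
assert not log.verify(key)
```

The chaining is what makes the log append-only in spirit: a user (or auditor) holding only the key can check the whole history in one pass.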
Proceedings on Privacy Enhancing Technologies · DOI: 10.2478/popets-2022-0054 · Published 2022-03-03
Citations: 1
CoverDrop: Blowing the Whistle Through A News App
Mansoor Ahmed-Rengers, Diana A. Vasile, Daniel Hugenroth, A. Beresford, Ross Anderson
Abstract Whistleblowing is hazardous in a world of pervasive surveillance, yet many leading newspapers expect sources to contact them with methods that are either insecure or barely usable. In an attempt to do better, we conducted two workshops with British news organisations and surveyed whistleblowing options and guidelines at major media outlets. We concluded that the soft spot is a system for initial contact and trust establishment between sources and reporters. CoverDrop is a two-way, secure system to do this. We support secure messaging within a news app, so that all its other users provide cover traffic, which we channel through a threshold mix instantiated in a Trusted Execution Environment within the news organisation. CoverDrop is designed to resist a powerful global adversary with the ability to issue warrants against infrastructure providers, yet it can easily be integrated into existing infrastructure. We present the results from our workshops, describe CoverDrop’s design and demonstrate its security and performance.
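The threshold mix that CoverDrop instantiates inside a Trusted Execution Environment can be illustrated with a minimal sketch: buffer messages and release them only in shuffled batches, so an observer cannot link inputs to outputs by timing or order. The class below is an assumed toy, not CoverDrop's implementation; the real system adds cover traffic, padding, and TEE attestation.

```python
import random
from dataclasses import dataclass, field


@dataclass
class ThresholdMix:
    """Toy threshold mix: hold incoming messages until `threshold`
    have arrived, then flush them all at once in random order."""

    threshold: int
    _pool: list = field(default_factory=list)

    def submit(self, message: bytes) -> list:
        self._pool.append(message)
        if len(self._pool) < self.threshold:
            return []  # keep buffering; emit nothing yet
        batch, self._pool = self._pool, []
        random.shuffle(batch)  # break any input/output ordering
        return batch


mix = ThresholdMix(threshold=3)
assert mix.submit(b"m1") == []
assert mix.submit(b"m2") == []
out = mix.submit(b"m3")
assert sorted(out) == [b"m1", b"m2", b"m3"]
```

Because every news-app user contributes (cover) messages to the same pool, a source's real message hides inside each flushed batch.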
Proceedings on Privacy Enhancing Technologies · DOI: 10.2478/popets-2022-0035 · Published 2022-03-03
Citations: 1
Increasing Adoption of Tor Browser Using Informational and Planning Nudges
Peter Story, Daniel Smullen, Rex Chen, Yaxing Yao, A. Acquisti, Lorrie Faith Cranor, N. Sadeh, F. Schaub
Abstract Browsing privacy tools can help people protect their digital privacy. However, tools which provide the strongest protections—such as Tor Browser—have struggled to achieve widespread adoption. This may be due to usability challenges, misconceptions, behavioral biases, or mere lack of awareness. In this study, we test the effectiveness of nudging interventions that encourage the adoption of Tor Browser. First, we test an informational nudge based on protection motivation theory (PMT), designed to raise awareness of Tor Browser and help participants form accurate perceptions of it. Next, we add an action planning implementation intention, designed to help participants identify opportunities for using Tor Browser. Finally, we add a coping planning implementation intention, designed to help participants overcome challenges to using Tor Browser, such as extreme website slowness. We test these nudges in a longitudinal field experiment with 537 participants. We find that our PMT-based intervention increased use of Tor Browser in both the short- and long-term. Our coping planning nudge also increased use of Tor Browser, but only in the week following our intervention. We did not find statistically significant evidence of our action planning nudge increasing use of Tor Browser. Our study contributes to a greater understanding of factors influencing the adoption of Tor Browser, and how nudges might be used to encourage the adoption of Tor Browser and similar privacy enhancing technologies.
Proceedings on Privacy Enhancing Technologies · DOI: 10.2478/popets-2022-0040 · Published 2022-03-03
Citations: 3