
Latest publications: Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium

Analyzing the Feasibility and Generalizability of Fingerprinting Internet of Things Devices
Dilawer Ahmed, Anupam Das, Fareed Zaffar
Abstract In recent years, we have seen rapid growth in the use and adoption of Internet of Things (IoT) devices. However, some IoT devices are sensitive in nature, and simply knowing what devices a user owns can have security and privacy implications. Researchers have, therefore, looked at fingerprinting IoT devices and their activities from encrypted network traffic. In this paper, we analyze the feasibility of fingerprinting IoT devices and evaluate the robustness of such a fingerprinting approach across multiple independent datasets collected under different settings. We show that not only is it possible to effectively fingerprint 188 IoT devices (with over 97% accuracy), but also to do so even with multiple instances of the same make-and-model device. We also analyze the extent to which temporal, spatial, and data-collection-methodology differences impact fingerprinting accuracy. Our analysis sheds light on features that are more robust against varying conditions. Lastly, we comprehensively analyze the performance of our approach under an open-world setting and propose ways in which an adversary can enhance their odds of inferring additional information about unseen devices (e.g., similar devices manufactured by the same company).
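A device-fingerprinting pipeline of the kind this abstract describes can be sketched with a toy nearest-centroid classifier over per-device traffic features. The feature set (mean packet size, mean inter-arrival time, flows per hour) and the device labels below are invented for illustration; the paper's actual features and classifier are considerably richer.

```python
import math

# Hypothetical per-device training observations:
# (mean packet size in bytes, mean inter-arrival time in s, flows per hour).
TRAINING = {
    "camera-a": [(640.0, 0.02, 120.0), (655.0, 0.025, 118.0)],
    "plug-b":   [(96.0, 1.5, 4.0), (102.0, 1.4, 5.0)],
}

def centroid(rows):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def fingerprint(sample, training=TRAINING):
    """Return the device label whose feature centroid is closest to sample."""
    centroids = {label: centroid(rows) for label, rows in training.items()}

    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return min(centroids, key=lambda label: dist(sample, centroids[label]))
```

An unlabeled traffic sample is then attributed to whichever known device profile it sits nearest to, which is the closed-world version of the problem; the open-world analysis in the paper additionally has to handle devices with no training profile at all.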
doi:10.2478/popets-2022-0057 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 578–600 · published 2022-03-03
Citations: 3
Understanding Privacy-Related Advice on Stack Overflow
Mohammad Tahaei, Tianshi Li, Kami Vaniea
Abstract Privacy tasks can be challenging for developers, resulting in privacy frameworks and guidelines from the research community which are designed to assist developers in considering privacy features and applying privacy enhancing technologies in the early stages of software development. However, how developers engage with privacy design strategies is not yet well understood. In this work, we look at the types of privacy-related advice developers give each other and how that advice maps to Hoepman's privacy design strategies. We qualitatively analyzed 119 privacy-related accepted answers on Stack Overflow from the past five years and extracted 148 pieces of advice from these answers. We find that the advice mostly concerns compliance with regulations and ensuring confidentiality, with a focus on the inform, hide, control, and minimize strategies from Hoepman's framework. The remaining strategies (abstract, separate, enforce, and demonstrate) are rarely advised. Answers often include links to official documentation and online articles, highlighting the value of both official documentation and other informal materials such as blog posts. We make recommendations for promoting the underused strategies through tools, and detail the importance of providing better developer support to handle third-party data practices.
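The mapping from advice to Hoepman's strategies was performed by human coders in the paper. Purely to illustrate what such a mapping looks like, here is a toy keyword matcher; the keyword lists and the four-strategy subset are invented, not the paper's codebook.

```python
# Invented keyword lists for four of Hoepman's eight strategies.
STRATEGY_KEYWORDS = {
    "inform":   ["privacy policy", "consent", "notice"],
    "hide":     ["encrypt", "hash", "tls"],
    "control":  ["opt-out", "delete", "settings"],
    "minimize": ["only collect", "don't store", "anonymize"],
}

def map_to_strategies(advice):
    """Return the sorted list of strategies whose keywords appear in advice."""
    advice = advice.lower()
    return sorted(
        strategy
        for strategy, keywords in STRATEGY_KEYWORDS.items()
        if any(k in advice for k in keywords)
    )
```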
doi:10.2478/popets-2022-0038 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 114–131 · published 2022-03-03
Citations: 20
How to prove any NP statement jointly? Efficient Distributed-prover Zero-Knowledge Protocols
Pankaj Dayama, A. Patra, Protik Paul, Nitin Singh, Dhinakaran Vinayagamurthy
Abstract Traditional zero-knowledge protocols have been studied and optimized for the setting where a single prover holds the complete witness and tries to convince a verifier about a predicate on the witness, without revealing any additional information to the verifier. In this work, we study the notion of distributed-prover zero knowledge (DPZK) for arbitrary predicates where the witness is shared among multiple mutually distrusting provers and they want to convince a verifier that their shares together satisfy the predicate. We make the following contributions to the notion of distributed proof generation: (i) we propose a new MPC-style security definition to capture the adversarial settings possible for different collusion models between the provers and the verifier, (ii) we discuss new efficiency parameters for distributed proof generation, such as the number of rounds of interaction and the amount of communication among the provers, and (iii) we propose a compiler that realizes distributed proof generation from zero-knowledge protocols in the Interactive Oracle Proofs (IOP) paradigm. Our compiler can be used to obtain DPZK from arbitrary IOP protocols, but the concrete efficiency overheads are substantial in general. To this end, we contribute (iv) a new zero-knowledge IOP, Graphene, which can be compiled into an efficient DPZK protocol. The (D + 1)-DPZK protocol D-Graphene, with D provers and one verifier, admits O(N^(1/c)) proof size with a communication complexity of O(D^2 · (N^(1−2/c) + N_s)), where N is the number of gates in the arithmetic circuit representing the predicate and N_s is the number of wires that depend on inputs from two or more parties. Significantly, only the distributed proof generation in D-Graphene requires interaction among the provers. D-Graphene compares favourably with the DPZK protocols obtained from state-of-the-art zero-knowledge protocols, even those not modelled as IOPs.
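The stated asymptotics can be made concrete by dropping constants and plugging in numbers. This is only an illustration of the shape of the bounds: proof size is independent of the number of provers D, while prover-to-prover communication grows with D^2.

```python
# Asymptotic bounds from the abstract with all hidden constants set to 1.
# These are relative comparisons only, not real sizes.

def proof_size(n, c):
    """O(N^(1/c)) proof size for an N-gate circuit."""
    return n ** (1.0 / c)

def prover_communication(n, ns, d, c):
    """O(D^2 * (N^(1-2/c) + N_s)) communication among D provers,
    where N_s counts wires depending on inputs from two or more parties."""
    return d * d * (n ** (1.0 - 2.0 / c) + ns)
```

For a million-gate circuit with c = 2, doubling the number of provers leaves the proof size untouched but quadruples the inter-prover communication term.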
doi:10.2478/popets-2022-0055 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 517–556 · published 2022-03-03
Citations: 3
Revisiting Identification Issues in GDPR ‘Right Of Access’ Policies: A Technical and Longitudinal Analysis
Mariano Di Martino, Isaac Meers, P. Quax, Kenneth M. Andries, W. Lamotte
Abstract Several data protection regulations permit individuals to request all personal information that an organization holds about them by utilizing Subject Access Requests (SARs). Prior work has observed the identification process of such requests, demonstrating weak policies that are vulnerable to potential data breaches. In this paper, we analyze and compare prior work in terms of methodologies, requested identification credentials, and threat models in the context of privacy and cybersecurity. Furthermore, we have devised a longitudinal study in which we examine the impact of responsible disclosures by re-evaluating the SAR authentication processes of 40 organizations after they had two years to improve their policies. Here, we demonstrate that 53% of the previously vulnerable organizations have not corrected their policy, and an additional 27% of previously non-vulnerable organizations have potentially weakened their policies instead of improving them, thus leaking sensitive personal information to potential adversaries. To better understand state-of-the-art SAR policies, we interviewed several Data Protection Officers, explored the reasoning behind their processes from an industry viewpoint, and gained insights about potential criminal abuse of weak SAR policies. Finally, we propose several technical modifications to SAR policies that reduce the privacy and security risks of data controllers.
doi:10.2478/popets-2022-0037 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 95–113 · published 2022-03-03
Citations: 5
Who Knows I Like Jelly Beans? An Investigation Into Search Privacy
Daniel Kats, David Silva, Johann Roturier
Abstract Internal site search is an integral part of how users navigate modern sites, from restaurant reservations to house hunting to searching for medical solutions. Search terms on these sites may contain sensitive information such as location, medical information, or sexual preferences; when further coupled with a user’s IP address or a browser’s user agent string, this information can become very specific, and in some cases possibly identifying. In this paper, we measure the various ways by which search terms are sent to third parties when a user submits a search query. We developed a methodology for identifying and interacting with search components, which we implemented on top of an instrumented headless browser. We used this crawler to visit the Tranco top one million websites and analyzed search term leakage across three vectors: URL query parameters, payloads, and the Referer HTTP header. Our crawler found that 512,701 of the top 1 million sites had internal site search. We found that 81.3% of websites containing internal site search sent (or leaked from a user’s perspective) our search terms to third parties in some form. We then compared our results to the expected results based on a natural language analysis of the privacy policies of those leaking websites (where available) and found that about 87% of those privacy policies do not mention search terms explicitly. However, about 75% of these privacy policies seem to mention the sharing of some information with third-parties in a generic manner. We then present a few countermeasures, including a browser extension to warn users about imminent search term leakage to third parties. We conclude this paper by making recommendations on clarifying the privacy implications of internal site search to end users.
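One of the three leakage vectors, a search term appearing in the URL query parameters of a request to a third party, can be illustrated with a small detector. Host names and parameter names below are invented; the paper's crawler additionally inspects request payloads and the Referer header.

```python
from urllib.parse import urlparse, parse_qs

def leaks_search_term(request_url, first_party_host, search_term):
    """True if request_url goes to a third-party host and carries the
    search term in any of its query-string parameters."""
    parsed = urlparse(request_url)
    if parsed.hostname == first_party_host:
        return False  # a first-party request is not a third-party leak
    params = parse_qs(parsed.query)
    term = search_term.lower()
    return any(term in v.lower() for values in params.values() for v in values)
```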
doi:10.2478/popets-2022-0053 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 426–446 · published 2022-03-03
Citations: 1
Comprehensive Analysis of Privacy Leakage in Vertical Federated Learning During Prediction
Xue Jiang, Xuebing Zhou, Jens Grossklags
Abstract Vertical federated learning (VFL), a variant of federated learning, has recently attracted increasing attention. An active party having the true labels jointly trains a model with other parties (referred to as passive parties) in order to use more features to achieve higher model accuracy. During the prediction phase, all the parties collaboratively compute the predicted confidence scores of each target record, and the results are finally returned to the active party. However, a recent study by Luo et al. [28] pointed out that the active party can use these confidence scores to reconstruct passive-party features, causing severe privacy leakage. In this paper, we conduct a comprehensive analysis of privacy leakage in VFL frameworks during the prediction phase. Our study improves on previous work [28] in two aspects. We first design a general gradient-based reconstruction attack framework that can be flexibly applied to simple logistic regression models as well as multi-layer neural networks. Moreover, besides performing the attack under the white-box setting, we make the first attempt to conduct the attack under the black-box setting. Extensive experiments on a number of real-world datasets show that our proposed attack is effective under different settings and can achieve at best a twofold or threefold reduction in attack error compared to previous work [28]. We further analyze a list of potential mitigation approaches and compare their privacy-utility performances. Experimental results demonstrate that privacy leakage from the confidence scores is a substantial privacy risk in VFL frameworks during the prediction phase, which cannot be simply solved by crypto-based confidentiality approaches. On the other hand, processing the confidence scores with information compression and randomization approaches can provide strengthened privacy protection.
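The core idea of a gradient-based reconstruction attack against a logistic-regression VFL model can be sketched as follows: the active party, knowing the model weights, its own features, and the returned confidence score, searches by gradient descent for a passive-party feature value consistent with that score. All weights and values below are invented, and this one-unknown white-box case is only the simplest instance; the paper's attack also covers multi-layer networks and the black-box setting.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def reconstruct_passive_feature(conf, w_active, x_active, w_passive, bias,
                                steps=5000, lr=0.5):
    """Gradient descent on (sigmoid(z) - conf)^2 over the unknown
    passive-party feature x_p, where
    z = w_active . x_active + w_passive * x_p + bias."""
    x_p = 0.0  # initial guess for the passive party's feature
    for _ in range(steps):
        z = sum(w * x for w, x in zip(w_active, x_active)) + w_passive * x_p + bias
        p = sigmoid(z)
        # d/dx_p of (p - conf)^2, using p' = p * (1 - p) * w_passive
        grad = 2.0 * (p - conf) * p * (1.0 - p) * w_passive
        x_p -= lr * grad
    return x_p
```

With a hidden feature value of 1.5, the score returned to the active party pins that value down almost exactly, which is the leak the paper quantifies.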
doi:10.2478/popets-2022-0045 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 263–281 · published 2022-03-03
Citations: 25
Efficient Set Membership Proofs using MPC-in-the-Head
Aarushi Goel, M. Green, Mathias Hall-Andersen, Gabriel Kaptchuk
Abstract Set membership proofs are an invaluable part of privacy preserving systems. These proofs allow a prover to demonstrate knowledge of a witness w corresponding to a secret element x of a public set, such that they jointly satisfy a given NP relation, i.e. ℛ(w, x) = 1 and x is a member of a public set {x_1, ..., x_ℓ}. This allows the identity of the prover to remain hidden, e.g., in ring signatures and confidential transactions in cryptocurrencies. In this work, we develop a new technique for efficiently adding logarithmic-sized set membership proofs to any MPC-in-the-head based zero-knowledge protocol (Ishai et al. [STOC’07]). We integrate our technique into an open source implementation of the state-of-the-art, post-quantum secure zero-knowledge protocol of Katz et al. [CCS’18]. We find that using our techniques to construct ring signatures results in signatures (based only on symmetric key primitives) that are between 5 and 10 times smaller than state-of-the-art techniques based on the same assumptions. We also show that our techniques can be used to efficiently construct post-quantum secure RingCT from only symmetric key primitives.
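The statement proven in zero knowledge has two parts: ℛ(w, x) = 1 for a secret x, and membership of x in the public set. The sketch below merely checks that compound relation in the clear, to pin down what is being proven; the protocol's entire point is to convince a verifier of this without revealing w or which set element was used. The concrete relation chosen here (the witness is a modular square root of x) is an invented stand-in for ℛ.

```python
P = 10007  # a small prime modulus, for illustration only

def relation(w, x):
    """Invented stand-in for the NP relation R(w, x) = 1:
    the witness w is a square root of x modulo P."""
    return (w * w) % P == x

def statement_holds(w, x, public_set):
    """The compound statement: R(w, x) = 1 AND x is in the public set."""
    return relation(w, x) and x in public_set
```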
doi:10.2478/popets-2022-0047 · Proceedings on Privacy Enhancing Technologies 2022(1), pp. 304–324 · published 2022-03-03
Cited by: 4
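The membership statement above (x belongs to a public set {x1, . . . , x𝓁}, proved with size logarithmic in 𝓁) is realized inside a zero-knowledge protocol in the paper. As a plain, non-zero-knowledge illustration of where the logarithmic size comes from, a Merkle authentication path contains exactly ⌈log2 𝓁⌉ hashes. The sketch below is a toy stand-in, not the paper's construction, and all names are illustrative:

```python
import hashlib
import math

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves):
    """Hash the leaves, then hash pairs upward; returns all levels, leaves first."""
    level = [h(x) for x in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Authentication path for leaf `index`: one sibling hash per tree level."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append((index % 2, level[index ^ 1]))  # (am I the right child?, sibling)
        index //= 2
    return path

def verify(root: bytes, element: bytes, path) -> bool:
    node = h(element)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

members = [str(i).encode() for i in range(8)]   # the public set, 𝓁 = 8
levels = build_tree(members)
root = levels[-1][0]                            # commitment to the whole set
proof = prove(levels, 5)                        # prove membership of b"5"
assert verify(root, b"5", proof)
assert not verify(root, b"99", proof)
assert len(proof) == math.ceil(math.log2(len(members)))  # ⌈log2 𝓁⌉ hashes
```

A real zero-knowledge set membership proof must additionally hide which leaf is opened and the element itself; here the index, element, and path are all public, so this only illustrates the logarithmic proof size.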
Updatable Private Set Intersection
S. Badrinarayanan, Peihan Miao, Tiancheng Xie
Abstract Private set intersection (PSI) allows two mutually distrusting parties, each holding a set as input, to learn the intersection of their sets without revealing anything more about their respective input sets. Traditionally, PSI studies the static setting, where the computation is performed only once on both parties' input sets. We initiate the study of updatable private set intersection (UPSI), which allows parties to compute the intersection of their private sets on a regular basis as those sets are continually updated. We consider two specific settings. In the first setting, called UPSI with addition, parties can add new elements to their old sets. We construct two protocols in this setting, one allowing both parties to learn the output and the other allowing only one party to learn it. In the second setting, called UPSI with weak deletion, parties can additionally delete their old elements every t days. We present a protocol for this setting allowing both parties to learn the output. All our protocols are secure against semi-honest adversaries and guarantee that both the computational and communication complexity grow only with the set updates instead of the entire sets. Finally, we implement our UPSI with addition protocols and compare them with state-of-the-art PSI protocols. Our protocols compare favorably when the total set size is sufficiently large, the new updates are sufficiently small, or in networks with low bandwidth.
DOI: 10.2478/popets-2022-0051 · Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium, vol. 2022, no. 1, pp. 378–406, published 2022-03-03.
Cited by: 3
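The actual UPSI protocols are cryptographic and keep both sets hidden; the efficiency target, namely per-round cost that grows with the updates rather than with the entire sets, can be seen in a plaintext stand-in (the class and method names below are illustrative, not the paper's API):

```python
class UpdatableIntersection:
    """Plaintext stand-in for UPSI with addition (no cryptography):
    each round compares only the newly added elements, so per-round
    work grows with the updates, not with the accumulated sets."""

    def __init__(self):
        self.mine, self.theirs, self.shared = set(), set(), set()

    def add(self, my_new, their_new):
        my_new, their_new = set(my_new), set(their_new)
        # Only the deltas are scanned; the old sets are never re-intersected.
        self.shared |= (my_new & self.theirs) | (their_new & self.mine) | (my_new & their_new)
        self.mine |= my_new
        self.theirs |= their_new
        return self.shared

u = UpdatableIntersection()
assert u.add({1, 2, 3}, {2, 4}) == {2}   # initial round
assert u.add({4}, {3}) == {2, 3, 4}      # later round: only the deltas are compared
```

Under UPSI with weak deletion, elements older than t days would additionally be retired from `mine` and `theirs`; that bookkeeping is omitted here.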
Building a Privacy-Preserving Smart Camera System
Yohan Beugin, Quinn K. Burke, Blaine Hoak, Ryan Sheatsley, Eric Pauley, Gang Tan, Syed Rafiul Hussain, P. Mcdaniel
Abstract Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can (and in some instances do) access the video footage without the users' knowledge or consent, violating the core tenet of user privacy. In this paper, we present CaCTUs, a privacy-preserving smart Camera system Controlled Totally by Users. CaCTUs returns control to the user; the root of trust begins with the user and is maintained through a series of cryptographic protocols designed to support popular features, such as sharing, deleting, and viewing videos live. We show that the system can support live streaming with a latency of 2 s at a frame rate of 10 fps and a resolution of 480p. In so doing, we demonstrate that it is feasible to implement a performant smart-camera system that leverages the convenience of a cloud-based model while retaining the ability to control access to (private) data.
DOI: 10.2478/popets-2022-0034 · Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium, vol. 2022, no. 1, pp. 25–46, published 2022-01-23.
Cited by: 4
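CaCTUs's concrete protocols are not reproduced here, but the core idea of a root of trust that begins with the user, from which shareable and deletable sub-keys descend, can be sketched with standard HMAC-based key derivation. The labels and helper names below are assumptions for illustration, not the paper's scheme:

```python
import hashlib
import hmac

def derive(parent_key: bytes, label: str) -> bytes:
    """One-way child-key derivation: a child key reveals nothing about
    its parent or about sibling keys derived under other labels."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

root = b"user-held root key; never leaves the user's devices"
camera_key = derive(root, "camera/front-door")
day_key = derive(camera_key, "day/2022-03-01")   # would encrypt that day's segments

# Sharing: a guest given only day_key can decrypt that day's footage but
# cannot derive other days, other cameras, or the root.
other_day = derive(camera_key, "day/2022-03-02")
assert day_key != other_day and len(day_key) == 32

# Deletion: discarding day_key (and any copies) makes that day's
# ciphertexts unrecoverable, a form of cryptographic deletion.
```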
Visualizing Privacy-Utility Trade-Offs in Differentially Private Data Releases
Priyanka Nanayakkara, Johes Bater, Xi He, J. Hullman, Jennie Duggan
Abstract Organizations often collect private data and release aggregate statistics for the public’s benefit. If no steps toward preserving privacy are taken, adversaries may use released statistics to deduce unauthorized information about the individuals described in the private dataset. Differentially private algorithms address this challenge by slightly perturbing underlying statistics with noise, thereby mathematically limiting the amount of information that may be deduced from each data release. Properly calibrating these algorithms—and in turn the disclosure risk for people described in the dataset—requires a data curator to choose a value for a privacy budget parameter, ɛ. However, there is little formal guidance for choosing ɛ, a task that requires reasoning about the probabilistic privacy–utility tradeoff. Furthermore, choosing ɛ in the context of statistical inference requires reasoning about accuracy trade-offs in the presence of both measurement error and differential privacy (DP) noise. We present Visualizing Privacy (ViP), an interactive interface that visualizes relationships between ɛ, accuracy, and disclosure risk to support setting and splitting ɛ among queries. As a user adjusts ɛ, ViP dynamically updates visualizations depicting expected accuracy and risk. ViP also has an inference setting, allowing a user to reason about the impact of DP noise on statistical inferences. Finally, we present results of a study where 16 research practitioners with little to no DP background completed a set of tasks related to setting ɛ using both ViP and a control. We find that ViP helps participants more correctly answer questions related to judging the probability of where a DP-noised release is likely to fall and comparing between DP-noised and non-private confidence intervals.
DOI: 10.2478/popets-2022-0058 · Proceedings on Privacy Enhancing Technologies. Privacy Enhancing Technologies Symposium, vol. 2022, no. 1, pp. 601–618, published 2022-01-16.
Cited by: 21
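The accuracy side of the trade-off ViP visualizes follows from the noise scale of the underlying mechanism. For a count query with sensitivity 1, the Laplace mechanism adds Laplace(1/ɛ) noise, so halving ɛ doubles the expected error. A minimal sketch (not ViP's implementation; function names are illustrative):

```python
import math
import random

def laplace_sample(scale: float, rng: random.Random) -> float:
    """Inverse-CDF sample from the Laplace(0, scale) distribution."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_count(true_count: int, epsilon: float,
                sensitivity: float = 1.0, rng=None) -> float:
    """ɛ-DP release of a count query: add Laplace(sensitivity/ɛ) noise."""
    rng = rng if rng is not None else random.Random()
    return true_count + laplace_sample(sensitivity / epsilon, rng)

# Same noise draws under two budgets: the ɛ = 0.1 releases are 10x noisier
# than the ɛ = 1.0 releases, since the noise scale is sensitivity/ɛ.
rng = random.Random(42)
tight = [noisy_count(100, epsilon=1.0, rng=rng) for _ in range(2000)]
rng = random.Random(42)
loose = [noisy_count(100, epsilon=0.1, rng=rng) for _ in range(2000)]
mean_abs_err = lambda xs: sum(abs(x - 100) for x in xs) / len(xs)
assert mean_abs_err(loose) > mean_abs_err(tight)  # lower ɛ, noisier answers
```

Splitting ɛ among several queries, as ViP supports, follows the same logic: each query gets a share of the budget, and a smaller share means a larger noise scale for that query.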