
Latest Publications: ACM Transactions on Privacy and Security

Resilience-by-design in Adaptive Multi-agent Traffic Control Systems
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-26 | DOI: https://dl.acm.org/doi/10.1145/3592799
Ranwa Al Mallah, Talal Halabi, Bilal Farooq

Connected and Autonomous Vehicles (CAVs), with their evolving data-gathering capabilities, will play a significant role in road safety and efficiency applications supported by Intelligent Transport Systems (ITSs), such as Traffic Signal Control (TSC) for urban traffic congestion management. However, their involvement will expand the space of security vulnerabilities and create larger threat vectors. In this article, we perform the first detailed security analysis and implementation of a new cyber-physical attack category carried out by the network of CAVs against Adaptive Multi-Agent Traffic Signal Control (AMATSC): coordinated Sybil attacks, in which vehicles with forged or fake identities try to alter the data collected by the AMATSC algorithms to sabotage their decisions. Consequently, a novel game-theoretic mitigation approach at the application layer is proposed to minimize the impact of such sophisticated data corruption attacks. The devised minimax game model enables the AMATSC algorithm to generate optimal decisions under a suspected attack, improving its resilience. Extensive experimentation is performed on a traffic dataset provided by the city of Montréal under real-world intersection settings to evaluate the attack impact. Our mitigation reduced time loss at attacked intersections by approximately 48.9%. Substantial benefits can be gained from the mitigation, yielding more robust adaptive control of traffic across networked intersections.
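To illustrate the minimax idea the abstract describes (this is a toy sketch, not the paper's actual game model; the loss matrix and strategy sets are hypothetical), the controller can select the signal plan that minimizes its worst-case time loss over the attacker's possible Sybil strategies:

```python
import numpy as np

# Hypothetical loss matrix: rows = controller actions (signal plans),
# columns = attacker strategies (Sybil injection patterns).
# Entry [i, j] is the time loss when the controller plays i and the attacker plays j.
loss = np.array([
    [12.0, 30.0, 18.0],   # plan A
    [20.0, 14.0, 25.0],   # plan B
    [16.0, 22.0, 15.0],   # plan C
])

def minimax_action(loss_matrix):
    """Return (action index, value) minimizing the worst-case (maximum) loss."""
    worst_case = loss_matrix.max(axis=1)   # worst loss for each controller action
    return int(worst_case.argmin()), float(worst_case.min())

action, value = minimax_action(loss)
print(action, value)  # plan C: its worst case (22.0) beats plans A (30.0) and B (25.0)
```

Under a suspected attack, this conservative choice bounds the damage regardless of which injection pattern the Sybil vehicles actually use.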

Citations: 0
Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-26 | DOI: https://dl.acm.org/doi/10.1145/3591354
Yang Lu, Zhengxin Yu, Neeraj Suri

Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for the distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where participating learners and the communication graph between them may vary during the learning process. In particular, whenever the communication graph changes, the Metropolis-Hastings method [69] is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir's secret sharing (SSS) scheme [61] is integrated to facilitate privacy in reaching consensus on the global model. The article establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.
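The Metropolis-Hastings weighting mentioned above is a standard construction for average consensus: each edge gets weight 1/(1 + max of the endpoint degrees), and the diagonal absorbs the remainder, yielding a symmetric, doubly stochastic matrix. A minimal sketch (the example graph is illustrative, not from the paper):

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Doubly stochastic weight matrix for average consensus on an undirected graph.

    adj: symmetric 0/1 adjacency matrix with no self-loops.
    W[i, j] = 1 / (1 + max(deg_i, deg_j)) for each edge (i, j);
    W[i, i] = 1 - sum of the off-diagonal weights in row i.
    """
    adj = np.asarray(adj)
    deg = adj.sum(axis=1)
    n = adj.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # make row i sum to 1
    return W

# Path graph 0-1-2; in a mobile setting this is recomputed
# every time the communication graph changes.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
W = metropolis_hastings_weights(adj)
```

Because `W` is symmetric with rows summing to 1, repeated local averaging `x <- W @ x` converges to the global average, which is what makes it useful for decentralized model aggregation.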

Citations: 0
B3: Backdoor Attacks Against Black-Box Machine Learning Models
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-06-22 | DOI: 10.1145/3605212
Xueluan Gong, Yanjiao Chen, Wenbin Yang, Huayang Huang, Qian Wang
Backdoor attacks aim to inject backdoors into victim machine learning models during training time, such that the backdoored model maintains the prediction power of the original model on clean inputs but misbehaves on inputs carrying the trigger. Backdoor attacks arise because resource-limited users usually download sophisticated models from model zoos or query models from MLaaS rather than training a model from scratch, so a malicious third party has a chance to provide a backdoored model. In general, the more precious the model provided (i.e., models trained on rare datasets), the more popular it is with users. In this paper, from a malicious model provider's perspective, we propose a black-box backdoor attack, named B3, where neither the rare victim model (including the model architecture, parameters, and hyperparameters) nor the training data is available to the adversary. To facilitate backdoor attacks in the black-box scenario, we design a cost-effective model extraction method that leverages a carefully constructed query dataset to steal the functionality of the victim model with a limited budget. As the trigger is key to successful backdoor attacks, we develop a novel trigger generation algorithm that intensifies the bond between the trigger and the targeted misclassification label through the neuron with the highest impact on the targeted label. Extensive experiments have been conducted on various simulated deep learning models and the commercial API of Alibaba Cloud Compute Service. We demonstrate that B3 has a high attack success rate and maintains high prediction accuracy for benign inputs. It is also shown that B3 is robust against state-of-the-art defense strategies against backdoor attacks, such as model pruning and NC.
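The "highest-impact neuron" step can be pictured with a linear final layer: the penultimate neuron with the largest weight into the target logit is the one a trigger should excite. A toy sketch (the weight values and the `highest_impact_neuron` helper are hypothetical illustrations, not the authors' implementation):

```python
import numpy as np

# Hypothetical final-layer weights of an extracted substitute model:
# rows = penultimate-layer neurons, columns = class logits.
W_out = np.array([
    [0.2, 1.5, -0.3],
    [0.9, 0.1,  0.4],
    [0.1, 2.3,  0.2],
])

def highest_impact_neuron(weights, target_label):
    """Index of the penultimate neuron with the largest weight into the target logit."""
    return int(np.argmax(weights[:, target_label]))

target = 1  # the attacker's chosen misclassification label
neuron = highest_impact_neuron(W_out, target)
print(neuron)  # neuron 2: weight 2.3 into logit 1 dominates the column
```

A trigger pattern optimized to maximize that neuron's activation then pushes any stamped input toward the target label, which is the intuition behind tying trigger generation to the most influential neuron.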
Citations: 1
Costs and Benefits of Authentication Advice
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-05-13 | DOI: https://dl.acm.org/doi/10.1145/3588031
Hazel Murray, David Malone

Authentication security advice is given with the goal of guiding users and organisations towards secure actions and practices. In this article, a taxonomy of 270 pieces of authentication advice is created, and a survey is conducted to gather information on the costs associated with following or enforcing the advice. Our findings indicate that security advice can be ambiguous and contradictory, with 41% of the advice collected being contradicted by another source. Additionally, users reported high levels of frustration with the advice and identified high usability costs. The study also found that end-users disagreed with each other 71% of the time about whether a piece of advice was valuable or not. We define a formal approach to identifying security benefits of advice. Our research suggests that cost-benefit analysis is essential in understanding the value of enforcing security policies. Furthermore, we find that organisation investment in security seems to have better payoffs than mechanisms with high costs to users.

Citations: 0
Privacy Policies across the Ages: Content of Privacy Policies 1996–2021
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-05-13 | DOI: https://dl.acm.org/doi/10.1145/3590152
Isabel Wagner

It is well known that most users do not read privacy policies but almost always tick the box to agree with them. While the length and readability of privacy policies have been well studied and many approaches for policy analysis based on natural language processing have been proposed, existing studies are limited in their depth and scope, often focusing on a small number of data practices at a single point in time. In this article, we fill this gap by analyzing the 25-year history of privacy policies using machine learning and natural language processing and presenting a comprehensive analysis of policy contents. Specifically, we collect a large-scale longitudinal corpus of privacy policies from 1996 to 2021 and analyze their content in terms of the data practices they describe, the rights they grant to users, and the rights they reserve for their organizations. We pay particular attention to changes in response to recent privacy regulations such as the GDPR and CCPA. We observe some positive changes, such as reductions in data collection post-GDPR, but also a range of concerning data practices, such as widespread implicit data collection for which users have no meaningful choices or access rights. Our work is an important step toward making privacy policies machine readable on the user side, which would help users match their privacy preferences against the policies offered by web services.

Citations: 0
PrivExtractor: Toward Redressing the Imbalance of Understanding between Virtual Assistant Users and Vendors
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-05-13 | DOI: https://dl.acm.org/doi/10.1145/3588770
Tom Bolton, Tooska Dargahi, Sana Belguith, Carsten Maple

The use of voice-controlled virtual assistants (VAs) is significant, and user numbers increase every year. Extensive use of VAs has provided the large, cash-rich technology companies who sell them with another way of consuming users’ data, providing a lucrative revenue stream. Whilst these companies are legally obliged to treat users’ information “fairly and responsibly,” artificial intelligence techniques used to process data have become incredibly sophisticated, leading to users’ concerns that a lack of clarity is making it hard to understand the nature and scope of data collection and use.

There has been little work undertaken on a self-contained user awareness tool targeting VAs. PrivExtractor, a novel web-based awareness dashboard for VA users, intends to redress this imbalance of understanding between the data “processors” and the user. It aims to achieve this using the four largest VA vendors as a case study and providing a comparison function that examines the four companies’ privacy practices and their compliance with data protection law.

As a result of this research, we conclude that the companies studied are largely compliant with the law, as expected. However, the user remains disadvantaged due to the ineffectiveness of current data regulation, which does not oblige the companies to fully and transparently disclose how and when they use, share, or profit from the data. Furthermore, we believe the software tool developed during the research is the first capable of a comparative analysis of VA privacy with a visual demonstration that makes the comparison easier for users to understand.

Citations: 0
Energy Efficient and Secure Neural Network–based Disease Detection Framework for Mobile Healthcare Network
IF 2.3 | CAS Tier 4 (Computer Science) | Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-04-15 | DOI: https://dl.acm.org/doi/10.1145/3585536
Sona Alex, Dhanaraj K. J., Deepthi P. P.

Adopting mobile healthcare network (MHN) services such as disease detection is fraught with concerns about the security and privacy of the entities involved and the resource restrictions at the Internet of Things (IoT) nodes. Hence, the essential requirements for disease detection services are to (i) produce accurate and fast disease detection without jeopardizing the privacy of health clouds and medical users and (ii) reduce the computational and transmission overhead (energy consumption) of the IoT devices while maintaining privacy. For privacy preservation of widely used neural network (NN)-based disease detection, existing literature suggests either computationally heavy public-key fully homomorphic encryption (FHE) or secure multiparty computation with a large number of interactions. Hence, the existing privacy-preserving NN schemes are energy-consuming and not suitable for resource-constrained IoT nodes in MHN. This work proposes a lightweight, fully homomorphic, symmetric-key FHE scheme (SkFhe) to address the issues involved in implementing privacy-preserving NN. Based on SkFhe, the widely used non-linear activation functions ReLU and Leaky ReLU are implemented over the encrypted domain. Furthermore, based on the proposed privacy-preserving linear transformation and non-linear activation functions, an energy-efficient, accurate, and privacy-preserving NN is proposed. The proposed scheme guarantees privacy preservation of the health cloud's NN model and medical user's data. The experimental analysis demonstrates that the proposed solution dramatically reduces the overhead in communication and computation at the user side compared to the existing schemes. Moreover, the improved energy efficiency at the user is accomplished with reduced diagnosis time without sacrificing classification accuracy.

Citations: 0
VulANalyzeR: Explainable Binary Vulnerability Detection with Multi-task Learning and Attentional Graph Convolution 基于多任务学习和注意图卷积的可解释二进制漏洞检测
IF 2.3 4区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-04-14 DOI: https://dl.acm.org/doi/10.1145/3585386
Litao Li, Steven H. H. Ding, Yuan Tian, Benjamin C. M. Fung, Philippe Charland, Weihan Ou, Leo Song, Congwei Chen

Software vulnerabilities have been posing tremendous reliability threats to the general public as well as to critical infrastructures, and many studies aim to detect and mitigate software defects at the binary level. Most standard practices leverage both static and dynamic analysis, which have several drawbacks, such as a heavy manual workload and high complexity. Existing deep learning-based solutions not only struggle to capture the complex relationships among different variables in raw binary code but also lack the explainability humans need to verify, evaluate, and patch the detected bugs.

We propose VulANalyzeR, a deep learning-based model for automated binary vulnerability detection, Common Weakness Enumeration-type classification, and root cause analysis to enhance safety and security. VulANalyzeR features sequential and topological learning through recurrent units and graph convolution to simulate how a program is executed. The attention mechanism is integrated throughout the model, which shows how different instructions and the corresponding states contribute to the final classification. It also classifies the specific vulnerability type through multi-task learning, as this not only provides further explanation but also allows faster patching of zero-day vulnerabilities. We show that VulANalyzeR outperforms state-of-the-art baselines on vulnerability detection. Additionally, a Common Vulnerabilities and Exposures (CVE) dataset is used to evaluate real, complex vulnerabilities. We conduct case studies to show that VulANalyzeR is able to accurately identify the instructions and basic blocks that cause a vulnerability, even without being given any prior knowledge of their locations during the training phase.
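The attentional graph convolution the abstract describes can be sketched as one attention-weighted message-passing step over a control-flow graph. The following plain-Python toy shows the mechanics only; the real model's features, scoring function, and learned parameters are not given in the abstract, and every name here is hypothetical:

```python
import math

# One attention-weighted graph-convolution step over a tiny graph of
# basic blocks. Attention weights come from a softmax over pairwise
# relevance scores, so each node's update is a convex combination of
# its own and its neighbours' feature vectors.

def softmax(scores):
    m = max(scores)                      # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_conv_step(features, edges, score):
    """features: node id -> feature vector (list of floats)
    edges: node id -> list of neighbour node ids
    score: pairwise relevance function driving the attention weights."""
    updated = {}
    for node, neighbours in edges.items():
        nbrs = neighbours + [node]       # include a self-loop
        weights = softmax([score(features[node], features[n]) for n in nbrs])
        dim = len(features[node])
        agg = [0.0] * dim
        for w, n in zip(weights, nbrs):
            for i in range(dim):
                agg[i] += w * features[n][i]
        updated[node] = agg
    return updated

if __name__ == "__main__":
    feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
    cfg = {0: [1], 1: [0, 2], 2: [1]}    # toy basic-block graph
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    print(attention_conv_step(feats, cfg, dot))
```

The per-neighbour weights are exactly the quantity an attention-based model can expose for explainability: they indicate which instructions or blocks contributed most to a node's updated representation, and ultimately to the classification.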

Citations: 0
Stateful Protocol Composition in Isabelle/HOL Isabelle/HOL中的有状态协议组合
IF 2.3 4区 计算机科学 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2023-04-14 DOI: https://dl.acm.org/doi/10.1145/3577020
Andreas V. Hess, Sebastian A. Mödersheim, Achim D. Brucker

Communication networks like the Internet form a large distributed system in which a huge number of components run in parallel, such as security protocols and distributed web applications. As far as security is concerned, it is clearly infeasible to verify them all at once as one monolithic entity; rather, individual components must be verified in isolation.

While many typical components like TLS have been studied intensively, much less research exists on analyzing and ensuring the security of compositions of security protocols. This is a problem, since a composition of systems that are each secure in isolation can easily be insecure. The main goal of compositionality is thus a theorem of the form: given a set of components that are already proved secure in isolation and that satisfy a number of easy-to-check conditions, their parallel composition is also secure. These conditions should, of course, also be realistic in practice or, better yet, already be satisfied by many existing components. Another benefit of compositionality is that when one would like to exchange one component for another, all that is needed is a proof that the new component is secure in isolation and satisfies the composition conditions, without having to re-prove anything about the other components.

This article makes three contributions over previous work on parallel compositionality. First, we extend the compositionality paradigm to stateful systems: while previous approaches work only for simple protocols with only a local session state, our result supports participants who maintain long-term databases that can be shared among several protocols. This includes a paradigm for the declassification of shared secrets. This result is in fact so general that it also covers many forms of sequential composition as a special case of stateful parallel composition. Second, our compositionality result is formalized and proved in Isabelle/HOL, providing a strong correctness guarantee for our proofs. This also means that one can prove, without gaps, the security of an entire system in Isabelle/HOL, namely the security of the components in isolation and the composition conditions, and thus derive the security of the entire system as an Isabelle theorem. For the components, one can also make use of our tool PSPSP, which can perform automatic proofs for many stateful protocols. Third, for the compositionality conditions, we have also implemented an automated check procedure in Isabelle.
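The central idea, components proved secure in isolation composing safely over shared long-term state, can be illustrated with a deliberately tiny model. This is plain Python, not Isabelle/HOL; the function names and the "no clashing writes" check are illustrative stand-ins for the paper's far richer formal conditions:

```python
# Toy model of stateful parallel composition: two protocol components
# interact with a shared long-term key-value store, and the composed
# system is any interleaving of their steps. The "no clashing writes"
# check stands in for an easy-to-check composition condition.

def register(db, key, value):
    """Component 1: stores a long-term value, refusing clashing writes."""
    if key in db:
        return False
    db[key] = value
    return True

def lookup(db, key):
    """Component 2: reads the shared long-term database."""
    return db.get(key)

if __name__ == "__main__":
    shared_db = {}  # long-term state shared by both components
    # One interleaving of the parallel composition:
    assert register(shared_db, "ltk-alice", "k_A")
    assert not register(shared_db, "ltk-alice", "k_evil")  # clash rejected
    print(lookup(shared_db, "ltk-alice"))
```

In the toy, the write-clash condition can be checked per component without inspecting the other; the paper's contribution is proving, in Isabelle/HOL, that conditions of this locally checkable flavour suffice for the security of the whole composed system.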

Citations: 0
Journal: ACM Transactions on Privacy and Security