
Latest Publications: ACM Transactions on Privacy and Security

Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-27 · DOI: https://dl.acm.org/doi/10.1145/3588771
Isaiah J. King, H. Howie Huang

Lateral movement is a key stage of system compromise used by advanced persistent threats. Detecting it is no simple task. When network host logs are abstracted into discrete temporal graphs, the problem can be reframed as anomalous edge detection in an evolving network. Research in modern deep graph learning techniques has produced many creative and complicated models for this task. However, as is the case in many machine learning fields, the generality of models is of paramount importance for accuracy and scalability during training and inference. In this article, we propose a formalized approach to this problem with a framework we call Euler. It consists of a model-agnostic graph neural network stacked upon a model-agnostic sequence encoding layer such as a recurrent neural network. Models built according to the Euler framework can easily distribute their graph convolutional layers across multiple machines for large performance improvements. Additionally, we demonstrate that Euler-based models are as good as, or better than, every state-of-the-art approach to anomalous link detection and prediction that we tested. As anomaly-based intrusion detection systems, our models efficiently identified anomalous connections between entities with high precision and outperformed all other unsupervised techniques for anomalous lateral movement detection. Additionally, we show that as a piece of a larger anomaly detection pipeline, Euler models perform well enough for use in real-world systems. With more advanced, yet still lightweight, alerting mechanisms ingesting the embeddings produced by Euler models, precision is boosted from 0.243 to 0.986 on real-world network traffic.
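The architecture described above — a topological encoder applied independently to each graph snapshot, feeding a temporal encoder — can be sketched compactly. The following PyTorch sketch is illustrative only, not the authors' released code; the choice of GCNConv from torch_geometric, a GRU as the sequence layer, the layer sizes, and the inner-product edge scorer are assumptions consistent with the abstract's "model-agnostic" framing.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv  # any GNN layer works; Euler is model-agnostic


class EulerSketch(nn.Module):
    """Per-snapshot GNN -> RNN over time -> node embeddings for link scoring."""

    def __init__(self, in_dim: int, hid_dim: int, emb_dim: int):
        super().__init__()
        self.gcn = GCNConv(in_dim, hid_dim)  # topological encoder, one pass per snapshot
        self.rnn = nn.GRU(hid_dim, emb_dim)  # temporal encoder over the snapshot sequence

    def forward(self, snapshots):
        # snapshots: list of (node_features, edge_index), one discrete temporal graph each.
        # Each GCN pass is independent of the others, so they can run on
        # different workers -- the source of the framework's scalability.
        per_step = [torch.relu(self.gcn(x, edge_index)) for x, edge_index in snapshots]
        h = torch.stack(per_step)            # (T, num_nodes, hid_dim)
        z, _ = self.rnn(h)                   # (T, num_nodes, emb_dim)
        return z

    @staticmethod
    def edge_score(z_t, src, dst):
        # Inner-product decoder: a low likelihood for an observed edge flags it
        # as anomalous (a candidate lateral-movement connection).
        return torch.sigmoid((z_t[src] * z_t[dst]).sum(dim=-1))
```

Because the recurrent layer only consumes fixed-size snapshot embeddings, the expensive graph convolutions parallelize across machines exactly as the abstract claims.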

{"title":"Euler: Detecting Network Lateral Movement via Scalable Temporal Link Prediction","authors":"Isaiah J. King, H. Howie Huang","doi":"https://dl.acm.org/doi/10.1145/3588771","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3588771","url":null,"abstract":"<p>Lateral movement is a key stage of system compromise used by advanced persistent threats. Detecting it is no simple task. When network host logs are abstracted into discrete temporal graphs, the problem can be reframed as anomalous edge detection in an evolving network. Research in modern deep graph learning techniques has produced many creative and complicated models for this task. However, as is the case in many machine learning fields, the generality of models is of paramount importance for accuracy and scalability during training and inference. In this article, we propose a formalized approach to this problem with a framework we call <span>Euler</span>. It consists of a model-agnostic graph neural network stacked upon a model-agnostic sequence encoding layer such as a recurrent neural network. Models built according to the <span>Euler</span> framework can easily distribute their graph convolutional layers across multiple machines for large performance improvements. Additionally, we demonstrate that <span>Euler</span>-based models are as good, or better, than every state-of-the-art approach to anomalous link detection and prediction that we tested. As anomaly-based intrusion detection systems, our models efficiently identified anomalous connections between entities with high precision and outperformed all other unsupervised techniques for anomalous lateral movement detection. Additionally, we show that as a piece of a larger anomaly detection pipeline, <span>Euler</span> models perform well enough for use in real-world systems. With more advanced, yet still lightweight, alerting mechanisms ingesting the embeddings produced by <span>Euler</span> models, precision is boosted from 0.243, to 0.986 on real-world network traffic.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
End-to-End Security for Distributed Event-driven Enclave Applications on Heterogeneous TEEs
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3592607
Gianluca Scopelliti, Sepideh Pouyanrad, Job Noorman, Fritz Alder, Christoph Baumann, Frank Piessens, Jan Tobias Mühlberg

This article presents an approach to provide strong assurance of the secure execution of distributed event-driven applications on shared infrastructures, while relying on a small Trusted Computing Base. We build upon and extend security primitives provided by Trusted Execution Environments (TEEs) to guarantee authenticity and integrity properties of applications, and to secure control of input and output devices. More specifically, we guarantee that if an output is produced by the application, it was allowed to be produced by the application’s source code based on an authentic trace of inputs.

We present an integrated open-source framework to develop, deploy, and use such applications across heterogeneous TEEs. Beyond authenticity and integrity, our framework optionally provides confidentiality and a notion of availability, and facilitates software development at a high level of abstraction over the platform-specific TEE layer. We support event-driven programming to develop distributed enclave applications in Rust and C for heterogeneous TEEs, including Intel SGX, ARM TrustZone, and Sancus.

In this article we discuss the workings of our approach, the extensions we made to the Sancus processor, and the integration of our development model with commercial TEEs. Our evaluation of security and performance aspects shows that TEEs, together with our programming model, form a basis for powerful security architectures for dependable systems in domains such as Industrial Control Systems and the Internet of Things, illustrating our framework's unique suitability for a broad range of use cases that combine cloud processing, mobile and edge devices, and lightweight sensing and actuation.
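The core guarantee — an output is accepted only if the application code could have produced it from an authentic trace of inputs — can be illustrated with a hash chain over consumed events plus a MAC over each emitted output. This is a minimal Python sketch of the idea, not the framework's API; the key handling is a placeholder for what remote attestation and the TEE would provide, and all function names are invented for illustration.

```python
import hashlib
import hmac

# Placeholder: in the real setting this key never leaves the enclave and is
# provisioned via remote attestation; here it is a dummy constant.
MODULE_KEY = b"demo-module-key"


def extend_trace(trace_hash: bytes, event: bytes) -> bytes:
    """Append-only hash chain over every input event the enclave consumes."""
    return hashlib.sha256(trace_hash + event).digest()


def tag_output(trace_hash: bytes, output: bytes) -> bytes:
    """Bind an output to the input trace that caused it, so a verifier can
    check authenticity and integrity of the whole event-driven run."""
    return hmac.new(MODULE_KEY, trace_hash + output, hashlib.sha256).digest()


# Event-driven loop (sketch): inputs extend the trace, outputs carry a tag.
trace = hashlib.sha256(b"module-init").digest()
for event in (b"sensor:42", b"sensor:43"):
    trace = extend_trace(trace, event)

output = b"actuate:open-valve"
tag = tag_output(trace, output)  # verifier recomputes the chain, checks the MAC
```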

{"title":"End-to-End Security for Distributed Event-driven Enclave Applications on Heterogeneous TEEs","authors":"Gianluca Scopelliti, Sepideh Pouyanrad, Job Noorman, Fritz Alder, Christoph Baumann, Frank Piessens, Jan Tobias Mühlberg","doi":"https://dl.acm.org/doi/10.1145/3592607","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3592607","url":null,"abstract":"<p>This article presents an approach to provide strong assurance of the secure execution of distributed event-driven applications on shared infrastructures, while relying on a small Trusted Computing Base. We build upon and extend security primitives provided by Trusted Execution Environments (TEEs) to guarantee authenticity and integrity properties of applications, and to secure control of input and output devices. More specifically, we guarantee that if an output is produced by the application, it was allowed to be produced by the application’s source code based on an authentic trace of inputs.</p><p>We present an integrated open-source framework to develop, deploy, and use such applications across heterogeneous TEEs. Beyond authenticity and integrity, our framework optionally provides confidentiality and a notion of availability, and facilitates software development at a high level of abstraction over the platform-specific TEE layer. We support event-driven programming to develop distributed enclave applications in Rust and C for heterogeneous TEE, including Intel SGX, ARM TrustZone, and Sancus.</p><p>In this article we discuss the workings of our approach, the extensions we made to the Sancus processor, and the integration of our development model with commercial TEEs. Our evaluation of security and performance aspects show that TEEs, together with our programming model, form a basis for powerful security architectures for dependable systems in domains such as Industrial Control Systems and the Internet of Things, illustrating our framework’s unique suitability for a broad range of use cases which combine cloud processing, mobile and edge devices, and lightweight sensing and actuation.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3592800
Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis

Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed model inversion attacks, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically rely only on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness against collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.
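For context, the gradient-based foundation this framework builds on works by optimising a dummy input until its gradients match those shared by a victim. The PyTorch sketch below shows that baseline only; the paper's contribution — the adversarial feature and style priors — would enter as extra terms in the objective and is omitted here. All names and hyperparameters are illustrative.

```python
import torch


def invert_from_gradients(model, loss_fn, observed_grads, x_shape, y,
                          steps: int = 500, lr: float = 0.1):
    """Gradient-matching inversion baseline: recover an input whose gradients
    reproduce the victim's observed update. The prior terms (features/style)
    that the paper adds on top are omitted in this sketch."""
    x = torch.randn(x_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        grads = torch.autograd.grad(loss_fn(model(x), y),
                                    model.parameters(), create_graph=True)
        mismatch = sum(((g - og) ** 2).sum() for g, og in zip(grads, observed_grads))
        mismatch.backward()   # differentiates the gradient mismatch w.r.t. x
        opt.step()
    return x.detach()
```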

{"title":"Beyond Gradients: Exploiting Adversarial Priors in Model Inversion Attacks","authors":"Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis","doi":"https://dl.acm.org/doi/10.1145/3592800","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3592800","url":null,"abstract":"<p>Collaborative machine learning settings such as federated learning can be susceptible to adversarial interference and attacks. One class of such attacks is termed <i>model inversion attacks</i>, characterised by the adversary reverse-engineering the model into disclosing the training data. Previous implementations of this attack typically <i>only</i> rely on the shared data representations, ignoring the adversarial priors, or require that specific layers are present in the target model, reducing the potential attack surface. In this work, we propose a novel context-agnostic model inversion framework that builds on the foundations of gradient-based inversion attacks, but additionally exploits the features and the style of the data controlled by an in-the-network adversary. Our technique outperforms existing gradient-based approaches both qualitatively and quantitatively across all training settings, showing particular effectiveness against the collaborative medical imaging tasks. Finally, we demonstrate that our method achieves significant success on two downstream tasks: sensitive feature inference and facial recognition spoofing.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Multi-User Constrained Pseudorandom Function Security of Generalized GGM Trees for MPC and Hierarchical Wallets
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3592608
Chun Guo, Xiao Wang, Xiang Xie, Yu Yu

Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of the mu security of the so-called GGM tree, which stems from the pseudorandom generator to pseudorandom function transformation of Goldreich, Goldwasser, and Micali, with the goal of providing references for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its mu prefix-constrained pseudorandom function security in the random oracle model. Our model allows us to derive concrete bounds and improvements for various protocols, which we showcase on the Bitcoin-Improvement-Proposal standard Bip32 hierarchical wallets and on function secret sharing protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with the state-of-the-art designs, our SHACAL3- and Keccak-p-based Bip32 variants reduce the communication cost of MPC-based implementations by 73.3% to 93.8%, whereas our AES-based function secret sharing substantially improves mu security while reducing computations by 50%.
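The GGM tree at the heart of this analysis turns a length-doubling PRG G(s) = (G_0(s), G_1(s)) into a PRF by walking the input bits down from the root key, and a prefix-constrained key is simply an inner node of that tree. The sketch below instantiates the PRG with SHA-256 purely for illustration; the paper analyses concrete instantiations such as SHACAL3, Keccak-p, and AES, not this one.

```python
import hashlib


def prg(seed: bytes) -> tuple[bytes, bytes]:
    """Length-doubling PRG G(s) = (G_0(s), G_1(s)), here built from SHA-256."""
    return (hashlib.sha256(seed + b"\x00").digest(),
            hashlib.sha256(seed + b"\x01").digest())


def ggm_prf(key: bytes, x: str) -> bytes:
    """F_key(x) for a bitstring x: descend the GGM tree, taking the left or
    right PRG output at each level according to the next input bit."""
    node = key
    for bit in x:
        left, right = prg(node)
        node = right if bit == "1" else left
    return node


def constrained_key(key: bytes, prefix: str) -> bytes:
    """A prefix-constrained key is a subtree root: its holder can evaluate F
    on exactly the inputs starting with `prefix` -- the structure behind
    Bip32-style hierarchical key derivation."""
    return ggm_prf(key, prefix)


# Consistency check: evaluating from the constrained key matches the full PRF.
k = b"\x00" * 32
assert ggm_prf(constrained_key(k, "10"), "11") == ggm_prf(k, "1011")
```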

{"title":"The Multi-User Constrained Pseudorandom Function Security of Generalized GGM Trees for MPC and Hierarchical Wallets","authors":"Chun Guo, Xiao Wang, Xiang Xie, Yu Yu","doi":"https://dl.acm.org/doi/10.1145/3592608","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3592608","url":null,"abstract":"<p>Multi-user (mu) security considers large-scale attackers that, given access to a number of cryptosystem instances, attempt to compromise at least one of them. We initiate the study of mu security of the so-called GGM tree that stems from the pseudorandom generator to pseudorandom function transformation of Goldreich, Goldwasser, and Micali, with a goal to provide references for its recently popularized use in applied cryptography. We propose a generalized model for GGM trees and analyze its <i>mu prefix-constrained pseudorandom function</i> security in the random oracle model. Our model allows to derive concrete bounds and improvements for various protocols, and we showcase on the Bitcoin-Improvement-Proposal standard <sans-serif>Bip32</sans-serif> hierarchical wallets and function secret sharing protocols. In both scenarios, we propose improvements with better performance and concrete security bounds at the same time. Compared with the state-of-the-art designs, our <sans-serif>SHACAL3</sans-serif>- and <span>Keccak</span>-p-based <sans-serif>Bip32</sans-serif> variants reduce the communication cost of MPC-based implementations by 73.3% to 93.8%, whereas our <sans-serif>AES</sans-serif>-based function secret sharing substantially improves mu security while reducing computations by 50%.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-preserving Resilient Consensus for Multi-agent Systems in a General Topology Structure
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3587933
Jian Hou, Jing Wang, Mingyue Zhang, Zhi Jin, Chunlin Wei, Zuohua Ding

Recent advances in consensus control have made it significant in multi-agent systems such as distributed machine learning and distributed multi-vehicle cooperative systems. However, during its application it is crucial to achieve resilience and privacy; specifically, when there are adversarial or faulty nodes in a general topology structure, normal agents should still be able to reach consensus while keeping their actual states unobserved.

In this article, we modify the state-of-the-art Q-consensus algorithm by introducing predefined noise or well-designed cryptography to guarantee the privacy of each agent's state. In the former case, we add specified noise to the agent state before it is transmitted to the neighbors and then gradually decrease the noise magnitude, so the exact agent state cannot be inferred. In the latter, the Paillier cryptosystem is applied to reconstruct the reward function over two consecutive interactions between each pair of neighboring agents. Therefore, multi-agent privacy-preserving resilient consensus (MAPPRC) can be achieved in a general topology structure. Moreover, in the modified version, we reconstruct the reward function and the credibility function so that both the convergence rate and the stability of the system are improved.
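The decaying-noise variant can be sketched in a few lines: each agent perturbs the state it transmits with noise whose magnitude shrinks geometrically, so early exchanges hide the true state while the perturbation vanishes and consensus is still reached. This sketch covers only that variant (not the Paillier one), and the mixing matrix, noise schedule, and parameter values are illustrative assumptions rather than the paper's.

```python
import numpy as np


def noisy_consensus(x0, W, rounds: int = 60, noise0: float = 1.0, decay: float = 0.8):
    """Average consensus with geometrically decaying transmission noise."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    for t in range(rounds):
        # Each agent shares a perturbed state; the noise level decays as decay**t.
        shared = x + noise0 * (decay ** t) * rng.standard_normal(x.shape)
        x = W @ shared  # row-stochastic mixing over the communication topology
    return x


# Four agents on a ring: states converge near the initial average (4.0),
# yet no agent ever transmitted its exact state in the early rounds.
W = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
print(noisy_consensus([1.0, 5.0, 3.0, 7.0], W))
```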

The simulation results indicate the algorithms’ tolerance for constant and/or persistent faulty agents as well as their protection of privacy. Compared with the previous studies that consider both resilience and privacy-preserving requirements, the proposed algorithms in this article greatly relax the topological conditions. At the end of the article, to verify the effectiveness of the proposed algorithms, we conduct two sets of experiments, i.e., a smart-car hardware platform consisting of four vehicles and a distributed machine learning platform containing 10 workers and a server.

{"title":"Privacy-preserving Resilient Consensus for Multi-agent Systems in a General Topology Structure","authors":"Jian Hou, Jing Wang, Mingyue Zhang, Zhi Jin, Chunlin Wei, Zuohua Ding","doi":"https://dl.acm.org/doi/10.1145/3587933","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3587933","url":null,"abstract":"<p>Recent advances of consensus control have made it significant in multi-agent systems such as in distributed machine learning, distributed multi-vehicle cooperative systems. However, during its application it is crucial to achieve resilience and privacy; specifically, when there are adversary/faulty nodes in a general topology structure, normal agents can also reach consensus while keeping their actual states unobserved.</p><p>In this article, we modify the state-of-the-art Q-consensus algorithm by introducing predefined noise or well-designed cryptography to guarantee the privacy of each agent state. In the former case, we add specified noise on agent state before it is transmitted to the neighbors and then gradually decrease the value of noise so the exact agent state cannot be evaluated. In the latter one, the Paillier cryptosystem is applied for reconstructing reward function in two consecutive interactions between each pair of neighboring agents. Therefore, multi-agent privacy-preserving resilient consensus (MAPPRC) can be achieved in a general topology structure. Moreover, in the modified version, we reconstruct reward function and credibility function so both convergence rate and stability of the system are improved.</p><p>The simulation results indicate the algorithms’ tolerance for constant and/or persistent faulty agents as well as their protection of privacy. Compared with the previous studies that consider both resilience and privacy-preserving requirements, the proposed algorithms in this article greatly relax the topological conditions. At the end of the article, to verify the effectiveness of the proposed algorithms, we conduct two sets of experiments, i.e., a smart-car hardware platform consisting of four vehicles and a distributed machine learning platform containing 10 workers and a server.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Resilience-by-design in Adaptive Multi-agent Traffic Control Systems
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3592799
Ranwa Al Mallah, Talal Halabi, Bilal Farooq

Connected and Autonomous Vehicles (CAVs), with their evolving data gathering capabilities, will play a significant role in road safety and efficiency applications supported by Intelligent Transport Systems (ITSs), such as Traffic Signal Control (TSC) for urban traffic congestion management. However, their involvement will expand the space of security vulnerabilities and create larger threat vectors. In this article, we perform the first detailed security analysis and implementation of a new cyber-physical attack category carried out by the network of CAVs against Adaptive Multi-Agent Traffic Signal Control (AMATSC), namely, coordinated Sybil attacks, where vehicles with forged or fake identities try to alter the data collected by the AMATSC algorithms to sabotage their decisions. Consequently, a novel, game-theoretic mitigation approach at the application layer is proposed to minimize the impact of such sophisticated data corruption attacks. The devised minimax game model enables the AMATSC algorithm to generate optimal decisions under a suspected attack, improving its resilience. Extensive experimentation is performed on a traffic dataset provided by the city of Montréal under real-world intersection settings to evaluate the attack impact. Our mitigation reduced time loss on attacked intersections by approximately 48.9%. Substantial benefits can be gained from the mitigation, yielding more robust adaptive control of traffic across networked intersections.
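As background on the mitigation, the defender's side of a finite zero-sum game can be computed by linear programming: choose a mixed strategy that maximises the guaranteed payoff against the worst-case adversary action. The sketch below solves that generic minimax problem with SciPy; the payoff matrix and its interpretation for AMATSC decisions are invented for illustration and are not the paper's model.

```python
import numpy as np
from scipy.optimize import linprog


def minimax_strategy(payoff):
    """Defender's optimal mixed strategy in a zero-sum matrix game.

    Maximise v subject to: for every adversary action j,
    sum_i p_i * payoff[i, j] >= v, with p a probability vector.
    Decision variables are (p_1, ..., p_n, v)."""
    A = np.asarray(payoff, dtype=float)
    n, m = A.shape
    c = np.zeros(n + 1)
    c[-1] = -1.0                                   # minimise -v == maximise v
    A_ub = np.hstack([-A.T, np.ones((m, 1))])      # v - p @ A[:, j] <= 0 for all j
    b_ub = np.zeros(m)
    A_eq = np.hstack([np.ones((1, n)), np.zeros((1, 1))])  # probabilities sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, 1)] * n + [(None, None)])
    return res.x[:n], res.x[-1]


# Toy 2x2 payoff (defender utility per decision/attack pair): the optimal
# randomised decision hedges against whichever attack pattern is launched.
strategy, game_value = minimax_strategy([[3.0, 1.0],
                                         [0.0, 2.0]])
print(strategy, game_value)  # ~[0.5, 0.5], value 1.5
```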

{"title":"Resilience-by-design in Adaptive Multi-agent Traffic Control Systems","authors":"Ranwa Al Mallah, Talal Halabi, Bilal Farooq","doi":"https://dl.acm.org/doi/10.1145/3592799","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3592799","url":null,"abstract":"<p>Connected and Autonomous Vehicles (CAVs) with their evolving data gathering capabilities will play a significant role in road safety and efficiency applications supported by Intelligent Transport Systems (ITSs), such as Traffic Signal Control (TSC) for urban traffic congestion management. However, their involvement will expand the space of security vulnerabilities and create larger threat vectors. In this article, we perform the first detailed security analysis and implementation of a new cyber-physical attack category carried out by the network of CAVs against Adaptive Multi-Agent Traffic Signal Control (AMATSC), namely, coordinated Sybil attacks, where vehicles with forged or fake identities try to alter the data collected by the AMATSC algorithms to sabotage their decisions. Consequently, a novel, game-theoretic mitigation approach at the application layer is proposed to minimize the impact of such sophisticated data corruption attacks. The devised minimax game model enables the AMATSC algorithm to generate optimal decisions under a suspected attack, improving its resilience. Extensive experimentation is performed on a traffic dataset provided by the city of Montréal under real-world intersection settings to evaluate the attack impact. Our results improved time loss on attacked intersections by approximately 48.9%. Substantial benefits can be gained from the mitigation, yielding more robust adaptive control of traffic across networked intersections.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-26 · DOI: https://dl.acm.org/doi/10.1145/3591354
Yang Lu, Zhengxin Yu, Neeraj Suri

Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where participating learners and the communication graph between them may vary during the learning process. In particular, whenever the communication graph changes, the Metropolis-Hastings method [69] is applied to update the weighted adjacency matrix based on the current communication topology. In addition, Shamir's secret sharing (SSS) scheme [61] is integrated to preserve privacy while reaching consensus on the global model. The article establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.
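The Metropolis-Hastings rule referenced above builds a symmetric, doubly stochastic mixing matrix from nothing but local degree information, which is why it suits a time-varying graph: whenever the topology changes, each learner recomputes its weights from its neighbours' degrees. A minimal sketch of the standard rule follows, with an illustrative path graph.

```python
import numpy as np


def metropolis_hastings_weights(adj):
    """Standard Metropolis-Hastings mixing matrix for an undirected graph:
    W[i, j] = 1 / (1 + max(deg_i, deg_j)) for each edge (i, j), and the
    self-weight absorbs the remainder so every row sums to one."""
    A = np.asarray(adj)
    n = A.shape[0]
    deg = A.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and A[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()
    return W


# Path graph 0-1-2: recomputing W like this is all that is needed when an
# edge appears or disappears in the time-varying communication graph.
print(metropolis_hastings_weights([[0, 1, 0],
                                   [1, 0, 1],
                                   [0, 1, 0]]))
```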

{"title":"Privacy-preserving Decentralized Federated Learning over Time-varying Communication Graph","authors":"Yang Lu, Zhengxin Yu, Neeraj Suri","doi":"https://dl.acm.org/doi/10.1145/3591354","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3591354","url":null,"abstract":"<p>Establishing how a set of learners can provide privacy-preserving federated learning in a fully decentralized (peer-to-peer, no coordinator) manner is an open problem. We propose the first privacy-preserving consensus-based algorithm for the distributed learners to achieve decentralized global model aggregation in an environment of high mobility, where participating learners and the communication graph between them may vary during the learning process. In particular, whenever the communication graph changes, the Metropolis-Hastings method [69] is applied to update the weighted adjacency matrix based on the current communication topology. In addition, the Shamir’s secret sharing (SSS) scheme [61] is integrated to facilitate privacy in reaching consensus of the global model. The article establishes the correctness and privacy properties of the proposed algorithm. The computational efficiency is evaluated by a simulation built on a federated learning framework with a real-world dataset.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
B3: Backdoor Attacks Against Black-Box Machine Learning Models
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-06-22 · DOI: 10.1145/3605212
Xueluan Gong, Yanjiao Chen, Wenbin Yang, Huayang Huang, Qian Wang
Backdoor attacks aim to inject backdoors into victim machine learning models during training time, such that the backdoored model maintains the prediction power of the original model on clean inputs and misbehaves on backdoored inputs carrying the trigger. Backdoor attacks are possible because resource-limited users usually download sophisticated models from model zoos or query models from MLaaS rather than training a model from scratch, so a malicious third party has a chance to provide a backdoored model. In general, the more precious the model provided (i.e., models trained on rare datasets), the more popular it is with users.

In this paper, from a malicious model provider perspective, we propose a black-box backdoor attack, named B3, where neither the rare victim model (including the model architecture, parameters, and hyperparameters) nor the training data is available to the adversary. To facilitate backdoor attacks in the black-box scenario, we design a cost-effective model extraction method that leverages a carefully constructed query dataset to steal the functionality of the victim model with a limited budget. As the trigger is key to successful backdoor attacks, we develop a novel trigger generation algorithm that intensifies the bond between the trigger and the targeted misclassification label through the neuron with the highest impact on the targeted label. Extensive experiments have been conducted on various simulated deep learning models and the commercial API of Alibaba Cloud Compute Service. We demonstrate that B3 has a high attack success rate and maintains high prediction accuracy for benign inputs. It is also shown that B3 is robust against state-of-the-art defense strategies against backdoor attacks, such as model pruning and NC.
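The model extraction step the abstract describes — distilling a local surrogate from black-box query responses so the trigger can be crafted offline — can be sketched as follows. The query-set construction and trigger algorithm that make B3 effective are the paper's contribution and are not reproduced here; victim_query, the surrogate architecture, and all hyperparameters are placeholders.

```python
import torch
import torch.nn.functional as F


def extract_surrogate(victim_query, surrogate, query_set,
                      epochs: int = 5, lr: float = 1e-3):
    """Distil a local surrogate of a black-box victim: label each query batch
    with the victim's API once, then train the surrogate on those soft labels.
    A trigger can then be optimised against the surrogate instead of the
    inaccessible victim model."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    with torch.no_grad():
        soft_labels = [victim_query(x) for x in query_set]  # one paid API call each
    for _ in range(epochs):
        for x, y in zip(query_set, soft_labels):
            opt.zero_grad()
            loss = F.kl_div(F.log_softmax(surrogate(x), dim=-1), y,
                            reduction="batchmean")
            loss.backward()
            opt.step()
    return surrogate
```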
{"title":"B3: Backdoor Attacks Against Black-Box Machine Learning Models","authors":"Xueluan Gong, Yanjiao Chen, Wenbin Yang, Huayang Huang, Qian Wang","doi":"10.1145/3605212","DOIUrl":"https://doi.org/10.1145/3605212","url":null,"abstract":"Backdoor attacks aim to inject backdoors to victim machine learning models during training time, such that the backdoored model maintains the prediction power of the original model towards clean inputs and misbehaves towards backdoored inputs with the trigger. The reason for backdoor attacks is that resource-limited users usually download sophisticated models from model zoos or query the models from MLaaS rather than training a model from scratch, thus a malicious third party has a chance to provide a backdoored model. In general, the more precious the model provided (i.e., models trained on rare datasets), the more popular it is with users. In this paper, from a malicious model provider perspective, we propose a black-box backdoor attack, named B3, where neither the rare victim model (including the model architecture, parameters, and hyperparameters) nor the training data is available to the adversary. To facilitate backdoor attacks in the black-box scenario, we design a cost-effective model extraction method that leverages a carefully-constructed query dataset to steal the functionality of the victim model with a limited budget. As the trigger is key to successful backdoor attacks, we develop a novel trigger generation algorithm that intensifies the bond between the trigger and the targeted misclassification label through the neuron with the highest impact on the targeted label. Extensive experiments have been conducted on various simulated deep learning models and the commercial API of Alibaba Cloud Compute Service. We demonstrate that B3 has a high attack success rate and maintains high prediction accuracy for benign inputs. It is also shown that B3 is robust against state-of-the-art defense strategies against backdoor attacks, such as model pruning and NC.","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41773668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Costs and Benefits of Authentication Advice
IF 2.3 · CAS Zone 4, Computer Science · Q1 Computer Science · Pub Date: 2023-05-13 · DOI: https://dl.acm.org/doi/10.1145/3588031
Hazel Murray, David Malone

Authentication security advice is given with the goal of guiding users and organisations towards secure actions and practices. In this article, a taxonomy of 270 pieces of authentication advice is created, and a survey is conducted to gather information on the costs associated with following or enforcing the advice. Our findings indicate that security advice can be ambiguous and contradictory, with 41% of the advice collected being contradicted by another source. Additionally, users reported high levels of frustration with the advice and identified high usability costs. The study also found that end-users disagreed with each other 71% of the time about whether a piece of advice was valuable or not. We define a formal approach to identifying the security benefits of advice. Our research suggests that cost-benefit analysis is essential to understanding the value of enforcing security policies. Furthermore, we find that organisational investment in security seems to have better payoffs than mechanisms that impose high costs on users.
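The cost-benefit framing is easy to make concrete: score each piece of advice on its usability and organisational costs and on its security benefit, then rank by net value. The items and scores below are invented purely for illustration — they are not the paper's taxonomy or survey data.

```python
# Hypothetical advice items with illustrative 0-10 scores.
advice = [
    {"text": "Require a second factor for remote logins", "user_cost": 2, "org_cost": 3, "benefit": 8},
    {"text": "Expire all passwords every 30 days",         "user_cost": 7, "org_cost": 2, "benefit": 1},
    {"text": "Block the most common 1,000 passwords",      "user_cost": 1, "org_cost": 1, "benefit": 5},
]

for item in advice:
    item["net"] = item["benefit"] - (item["user_cost"] + item["org_cost"])

# Rank: advice whose enforcement cost outweighs its benefit sinks to the bottom.
for item in sorted(advice, key=lambda a: a["net"], reverse=True):
    print(f"{item['text']}: net {item['net']:+d}")
```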

{"title":"Costs and Benefits of Authentication Advice","authors":"Hazel Murray, David Malone","doi":"https://dl.acm.org/doi/10.1145/3588031","DOIUrl":"https://doi.org/https://dl.acm.org/doi/10.1145/3588031","url":null,"abstract":"<p>Authentication security advice is given with the goal of guiding users and organisations towards secure actions and practices. In this article, a taxonomy of 270 pieces of authentication advice is created, and a survey is conducted to gather information on the costs associated with following or enforcing the advice. Our findings indicate that security advice can be ambiguous and contradictory, with 41% of the advice collected being contradicted by another source. Additionally, users reported high levels of frustration with the advice and identified high usability costs. The study also found that end-users disagreed with each other 71% of the time about whether a piece of advice was valuable or not. We define a formal approach to identifying security benefits of advice. Our research suggests that cost-benefit analysis is essential in understanding the value of enforcing security policies. Furthermore, we find that organisation investment in security seems to have better payoffs than mechanisms with high costs to users.</p>","PeriodicalId":56050,"journal":{"name":"ACM Transactions on Privacy and Security","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138540666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0