
2023 IEEE Security and Privacy Workshops (SPW): Latest Publications

Whole-Program Privilege and Compartmentalization Analysis with the Object-Encapsulation Model
Pub Date: 2023-05-01 | DOI: 10.1109/SPW59333.2023.00018
Yudi Yang, Weijie Huang, Kelly Kaoudis, Nathan Dautenhahn
We present the object-encapsulation model, a low-level program representation and analysis framework that exposes and quantifies privilege within a program. Successfully compartmentalizing an application today requires significant expertise, but is an attractive goal as it reduces the connectability of attack vectors in exploit chains. The object-encapsulation model enables understanding how a program can best be compartmentalized without requiring deep knowledge of program internals. We translate a program to a new representation, the Program Capability Graph (PCG), mapping each operation to the code and data objects it may access. We aggregate PCG elements into encapsulated-object groups. The resulting encapsulated-objects PCG enables measuring program interconnectedness and encapsulated-object privileges in order to explore and compare compartmentalization strategies. Our deep dive into parsers reveals they are well encapsulated, requiring access to an average of 545/4902 callable interfaces and 1201/29198 external objects. This means the parsers we evaluate can be easily compartmentalized, applying the encapsulated-objects PCG and our analysis to facilitate automatic or manual trust boundary placement. Overall, the object-encapsulation model provides an essential element to language-level analysis of least-privilege in complex systems to aid codebase understanding and refactoring.
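The PCG at the heart of the paper is essentially a bipartite operation-to-object mapping, which is easy to prototype. Below is a minimal Python sketch of such a graph and the encapsulated-object grouping built from it; the operation and object names are invented for illustration, and the privilege measure is a simplification of the paper's metrics.

```python
from collections import defaultdict

# Toy Program Capability Graph (PCG): each operation is mapped to the code
# and data objects it may access. All names here are hypothetical.
pcg = {
    "parse_header":  {"buf", "header_struct", "read_bytes"},
    "parse_body":    {"buf", "body_struct", "read_bytes", "alloc"},
    "render_output": {"body_struct", "write_bytes"},
    "log_event":     {"log_sink"},
}

# Invert the graph: which operations can reach each object?
object_to_ops = defaultdict(set)
for op, objs in pcg.items():
    for obj in objs:
        object_to_ops[obj].add(op)

# Aggregate into encapsulated-object groups: objects touched by exactly
# the same set of operations collapse into one group.
groups = defaultdict(set)
for obj, ops in object_to_ops.items():
    groups[frozenset(ops)].add(obj)

# A rough privilege measure: the fraction of all objects an operation may
# access (lower means better encapsulated).
total = len(object_to_ops)
for op, objs in sorted(pcg.items()):
    print(f"{op}: may access {len(objs)}/{total} objects")
for ops, objs in groups.items():
    print(f"group {sorted(objs)} is encapsulated by {sorted(ops)}")
```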
Citations: 0
Fuzzing the Latest NTFS in Linux with Papora: An Empirical Study
Pub Date: 2023-04-14 | DOI: 10.1109/SPW59333.2023.00034
E. Lo, Ningyu He, Yuejie Shi, Jiajia Xu, Chiachih Wu, Ding Li, Yao Guo
Recently, the first feature-rich NTFS implementation, NTFS3, has been upstreamed to Linux. Although ensuring the security of NTFS3 is essential for the future of Linux, it remains unclear whether the most recent version of NTFS for Linux contains 0-day vulnerabilities. To this end, we implemented Papora, the first effective fuzzer for NTFS3. We have identified and reported 3 CVE-assigned 0-day vulnerabilities and 9 severe bugs in NTFS3. Furthermore, we have investigated the underlying causes and types of these vulnerabilities and bugs. We have conducted an empirical study on the identified bugs, and the results of our study offer practical insights regarding the security of NTFS3 in Linux.
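As a rough illustration of what a file-system image fuzzer does, the Python sketch below mutates a seed NTFS image and feeds it to a harness. The seed file `seed.ntfs` and the harness `./mount-and-walk` are hypothetical stand-ins; a kernel fuzzer like Papora additionally needs a sandboxed VM, coverage feedback, and crash triage.

```python
import os
import random
import subprocess
import tempfile

SEED_IMAGE = "seed.ntfs"        # hypothetical seed file-system image
HARNESS = ["./mount-and-walk"]  # hypothetical harness that mounts and exercises the image

def mutate(data: bytes, n_flips: int = 8) -> bytes:
    """Flip a few random bits in the image, preserving its length."""
    buf = bytearray(data)
    for _ in range(n_flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)
    return bytes(buf)

def fuzz_once(seed: bytes) -> bool:
    """Run the harness on one mutated image and report a suspected crash."""
    with tempfile.NamedTemporaryFile(suffix=".img", delete=False) as f:
        f.write(mutate(seed))
        path = f.name
    try:
        proc = subprocess.run(HARNESS + [path], capture_output=True, timeout=10)
        return proc.returncode < 0   # negative return code: killed by a signal
    except subprocess.TimeoutExpired:
        return False                 # hangs would be triaged separately in practice
    finally:
        os.unlink(path)

if __name__ == "__main__":
    seed = open(SEED_IMAGE, "rb").read()
    crashes = sum(fuzz_once(seed) for _ in range(100))
    print(f"{crashes} suspected crashes in 100 runs")
```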
Citations: 0
On Feasibility of Server-side Backdoor Attacks on Split Learning
Pub Date: 2023-02-19 | DOI: 10.1109/SPW59333.2023.00014
B. Tajalli, O. Ersoy, S. Picek
Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private. In split learning, the network is split into two halves: clients hold the initial part of the network up to the cut layer, and the remaining part resides on the server side. In the training process, clients feed the data into the first part of the network and send the output (smashed data) to the server, which uses it as the input for the remaining part of the network. Recent studies demonstrate that collaborative learning models, specifically federated learning, are vulnerable to security and privacy attacks such as model inference and backdoor attacks. While there have been studies regarding inference attacks on split learning, it has not yet been tested for backdoor attacks. This paper performs a novel backdoor attack on split learning and studies its effectiveness. Unlike traditional backdoor attacks, which are performed on the client side, we inject the backdoor trigger from the server side. We provide two attack methods: one using a surrogate client and another using an autoencoder to poison the model via the incoming smashed data and its outgoing gradient toward the innocent participants. The results show that despite using strong patterns and injection methods, split learning is highly robust and resistant to such poisoning attacks. While we achieve an attack success rate of 100% as our best result on the MNIST dataset, in most other cases our attack shows little success when increasing the cut layer.
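To make the setting concrete, here is a minimal PyTorch sketch of one split-learning round in which the server returns a tampered gradient to the client. The architecture and the random perturbation are illustrative stand-ins for the paper's surrogate-client and autoencoder-based poisoning methods.

```python
import torch
import torch.nn as nn

# Client holds the network up to the cut layer; the server holds the rest.
# Shapes assume MNIST-sized inputs, matching the paper's main dataset.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 10))

x = torch.randn(32, 1, 28, 28)           # dummy batch of images
y = torch.randint(0, 10, (32,))          # dummy labels

smashed = client_net(x)                  # client sends "smashed data" to the server
smashed_srv = smashed.detach().requires_grad_(True)

loss = nn.functional.cross_entropy(server_net(smashed_srv), y)
loss.backward()                          # gradient w.r.t. the smashed data

# A malicious server returns a tampered gradient toward the clients. The
# random perturbation is a placeholder for the autoencoder-crafted poison.
tampered = smashed_srv.grad + 0.1 * torch.randn_like(smashed_srv.grad)
smashed.backward(tampered)               # client updates flow from the poisoned gradient
print(client_net[1].weight.grad.norm())
```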
Citations: 0
Unsupervised clustering of file dialects according to monotonic decompositions of mixtures
Pub Date: 2023-02-09 | DOI: 10.1109/SPW59333.2023.00019
Michael Robinson, Tate Altman, Denley Lam, Le Li
This paper proposes an unsupervised classification method that partitions a set of files into non-overlapping dialects based upon their behaviors, determined by messages produced by a collection of programs that consume them. The pattern of messages can be used as the signature of a particular kind of behavior, with the understanding that some messages are likely to co-occur, while others are not. We propose a novel definition for a file format dialect, based upon these behavioral signatures. A dialect defines a subset of the possible messages, called the required messages. Once files are conditioned upon a dialect and its required messages, the remaining messages are statistically independent. With this definition in hand, we present a greedy algorithm that deduces candidate dialects from a dataset consisting of a matrix of file-message data, demonstrate its performance on several file formats, and prove conditions under which it is optimal. We show that an analyst needs to consider fewer dialects than distinct message patterns, which reduces their cognitive load when studying a complex format.
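A toy version of the file-message matrix and a greedy dialect pass can be sketched in a few lines of Python. The files, messages, and merging rule below are simplified illustrations; the paper's algorithm additionally verifies that the messages left over after conditioning on a dialect are statistically independent.

```python
from collections import Counter

# Toy file-message matrix: rows are files, columns are messages emitted by
# programs consuming them (1 = the message fired). All entries are invented.
files = {
    "a.pdf": (1, 1, 0, 0),
    "b.pdf": (1, 1, 0, 0),
    "c.pdf": (1, 1, 1, 0),
    "d.pdf": (0, 0, 1, 1),
    "e.pdf": (0, 0, 1, 1),
}

# Greedy pass: treat each distinct message pattern as a candidate dialect,
# most common first, and skip patterns already covered by an accepted
# dialect's required messages.
patterns = Counter(files.values())
dialects = []
for pattern, _count in patterns.most_common():
    required = {i for i, bit in enumerate(pattern) if bit}
    if not any(required >= d for d in dialects):
        dialects.append(required)

for d in dialects:
    members = [f for f, p in files.items()
               if {i for i, b in enumerate(p) if b} >= d]
    print(f"dialect with required messages {sorted(d)}: {members}")
```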
Citations: 0
Membership Inference Attacks against Diffusion Models
Pub Date: 2023-02-07 | DOI: 10.1109/SPW59333.2023.00013
Tomoya Matsumoto, Takayuki Miura, Naoto Yanai
Diffusion models have attracted attention in recent years as innovative generative models. In this paper, we investigate whether a diffusion model is resistant to a membership inference attack, which evaluates the privacy leakage of a machine learning model. We primarily discuss the diffusion model from two standpoints: comparison with a generative adversarial network (GAN) as a conventional model, and hyperparameters unique to the diffusion model, i.e., timesteps, sampling steps, and sampling variances. We conduct extensive experiments with DDIM as a diffusion model and DCGAN as a GAN on the CelebA and CIFAR-10 datasets in both white-box and black-box settings, and show that the diffusion model is comparable to the GAN in its resistance to a membership inference attack. Next, we demonstrate that the impact of timesteps is significant and that intermediate steps in a noise schedule are the most vulnerable to the attack. We also found two key insights through further analysis. First, we identify that DDIM is vulnerable to the attack for small sample sizes instead of achieving a lower FID. Second, sampling steps in hyperparameters are important for resistance to the attack, whereas the impact of sampling variances is quite limited.
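The attack the abstract evaluates can be approximated by a loss-threshold baseline: forward-diffuse a candidate sample to a chosen timestep, ask the model to predict the injected noise, and flag samples with unusually low error as training members. The sketch below uses a dummy noise predictor, schedule, and threshold, all assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

class DummyEps(nn.Module):
    """Stand-in noise predictor; a real attack queries the target DDIM/DDPM."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, 3, padding=1)
    def forward(self, x_t, t):
        return self.net(x_t)  # ignores t; a real model conditions on it

def per_sample_loss(model, x0, t, alpha_bar):
    """DDPM noise-prediction loss for each sample at timestep t."""
    noise = torch.randn_like(x0)
    shape = (-1,) + (1,) * (x0.dim() - 1)
    a = alpha_bar[t].sqrt().view(shape)
    s = (1 - alpha_bar[t]).sqrt().view(shape)
    x_t = a * x0 + s * noise          # forward-diffuse x0 to step t
    pred = model(x_t, t)              # model predicts the injected noise
    return ((pred - noise) ** 2).flatten(1).mean(dim=1)

# Threshold attack: unusually low loss at an intermediate timestep suggests
# the sample was in the training set.
T = 1000
alpha_bar = torch.cumprod(1 - torch.linspace(1e-4, 0.02, T), dim=0)
model, thresh = DummyEps(), 1.0       # thresh is calibrated on shadow data in practice
x0 = torch.randn(8, 1, 28, 28)        # candidate samples (MNIST-shaped here)
t = torch.full((8,), T // 2)          # intermediate steps are the most telling per the paper
with torch.no_grad():
    is_member = per_sample_loss(model, x0, t, alpha_bar) < thresh
print(is_member)
```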
Citations: 14
ROPfuscator: Robust Obfuscation with ROP
Pub Date: 2020-12-16 | DOI: 10.1109/SPW59333.2023.00026
Giulio De Pasquale, Fukutomo Nakanishi, Daniele Ferla, L. Cavallaro
Software obfuscation is crucial in protecting intellectual property in software from reverse engineering attempts. While some obfuscation techniques originate from the obfuscation-reverse engineering arms race, others stem from different research areas, such as binary software exploitation. Return-oriented programming (ROP) has become one of the most effective exploitation techniques for memory error vulnerabilities. ROP interferes with our natural perception of a process control flow, inspiring us to repurpose ROP as a robust and effective form of software obfuscation. Although previous work already explores ROP's effectiveness as an obfuscation technique, evolving reverse engineering research raises the need for principled reasoning to understand the strengths and limitations of ROP-based mechanisms against man-at-the-end (MATE) attacks. To this end, we present ROPFuscator, a compiler-driven obfuscation pass based on ROP for any programming language supported by LLVM. We incorporate opaque predicates and constants and a novel instruction hiding technique to withstand sophisticated MATE attacks. More importantly, we introduce a realistic and unified threat model to thoroughly evaluate ROPFuscator and provide principled reasoning on ROP-based obfuscation techniques that address code coverage, incurred overhead, correctness, robustness, and practicality challenges. The project's source code is published online to aid further research.
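To see mechanically what ROP-as-obfuscation means, the Python sketch below packs a stack payload that re-expresses a two-instruction sequence as a chain of gadget returns. The gadget addresses are invented, and ROPFuscator itself performs this rewriting inside an LLVM compiler pass rather than as a standalone script.

```python
import struct

# Hypothetical gadget addresses in the protected binary. In ROP-based
# obfuscation, original instructions are replaced by chains of returns
# through gadgets like these, so the logic never appears as plain code.
GADGETS = {
    "pop_eax":     0x080491A2,  # pop eax ; ret
    "add_eax_ebx": 0x080493C1,  # add eax, ebx ; ret
}

def chain(*entries: int) -> bytes:
    """Pack gadget addresses and immediates into a little-endian stack payload."""
    return b"".join(struct.pack("<I", e) for e in entries)

# Original sequence:      mov eax, 5 ; add eax, ebx
# Obfuscated equivalent:  a payload the rewritten code pivots the stack into.
payload = chain(
    GADGETS["pop_eax"], 5,       # eax <- 5
    GADGETS["add_eax_ebx"],      # eax <- eax + ebx
)
print(payload.hex())
```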
Citations: 0