
2021 IEEE Symposium on Security and Privacy (SP): Latest Publications

Revealer: Detecting and Exploiting Regular Expression Denial-of-Service Vulnerabilities
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00062
Yinxi Liu, Mingxue Zhang, W. Meng
Regular expression Denial-of-Service (ReDoS) is a class of algorithmic complexity attacks. Attackers can craft particular strings to trigger the worst-case super-linear matching time of some vulnerable regular expressions (regex) with extended features that are commonly supported by popular programming languages. ReDoS attacks can severely degrade the performance of web applications, which extensively employ regexes in their server-side logic. Nevertheless, the characteristics of vulnerable regexes with extended features remain understudied, making it difficult to mitigate or even detect such vulnerabilities. In this paper, we aim to model vulnerable regex patterns generated by popular regex engines and craft attack strings accordingly. Our characterization fully supports the analysis of regexes with any extended feature. We develop Revealer to detect vulnerable structures presented in any given regex and generate attack strings to exploit the corresponding vulnerabilities. Revealer takes a hybrid approach. It first statically locates potential vulnerable structures of a regex, then dynamically verifies whether the vulnerabilities can be triggered or not, and finally crafts attack strings that can lead to recursive backtracking. By combining both static analysis and dynamic analysis, Revealer can accurately and efficiently generate exploits in a limited amount of time. It can further offer mitigation suggestions based on the structural information it identifies. We implemented a prototype of Revealer for Java. We evaluated Revealer over a dataset with 29,088 regexes, and compared it with three state-of-the-art tools. The evaluation shows that Revealer considerably outperformed all the existing tools—Revealer can detect all 237 vulnerabilities that can be detected by any other tool, find 213 new vulnerabilities, and beat the best tool by 140.64%. We further demonstrate that Revealer successfully detected 45 vulnerable regexes in popular real-world applications. Our evaluation demonstrates that Revealer is both effective and efficient in detecting and exploiting ReDoS vulnerabilities.
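To make the super-linear backtracking behaviour concrete, here is a minimal Python sketch (illustrative only, not Revealer's analysis): the nested-quantifier pattern and the crafted non-matching input are textbook examples, and the timing loop shows matching time roughly doubling with each extra character.

```python
import re
import time

# A classic ReDoS-prone pattern: nested quantifiers force a backtracking
# engine to explore exponentially many ways to split the run of 'a's.
VULNERABLE = re.compile(r'^(a+)+$')

def measure(n: int) -> float:
    # A non-matching suffix ('!') forces the engine to exhaust every split.
    attack = 'a' * n + '!'
    start = time.perf_counter()
    VULNERABLE.match(attack)
    return time.perf_counter() - start

if __name__ == '__main__':
    # Keep n modest: each additional 'a' roughly doubles the running time.
    for n in (10, 14, 18, 22):
        print(f'n={n:2d}  {measure(n):.4f}s')
```

Linear-time engines such as RE2 do not exhibit this blow-up, which is why the attack surface lies in backtracking engines with extended features.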
Pages: 1468-1484
Citations: 15
One Engine to Fuzz ’em All: Generic Language Processor Testing with Semantic Validation
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00071
Yongheng Chen, Rui Zhong, Hong Hu, Hangfan Zhang, Yupeng Yang, Dinghao Wu, Wenke Lee
Language processors, such as compilers and interpreters, are indispensable in building modern software. Errors in language processors can lead to severe consequences, like incorrect functionalities or even malicious attacks. However, it is not trivial to automatically test language processors to find bugs. Existing testing methods (or fuzzers) either fail to generate high-quality (i.e., semantically correct) test cases, or only support limited programming languages. In this paper, we propose POLYGLOT, a generic fuzzing framework that generates high-quality test cases for exploring processors of different programming languages. To achieve the generic applicability, POLYGLOT neutralizes the difference in syntax and semantics of programming languages with a uniform intermediate representation (IR). To improve the language validity, POLYGLOT performs constrained mutation and semantic validation to preserve syntactic correctness and fix semantic errors. We have applied POLYGLOT on 21 popular language processors of 9 programming languages, and identified 173 new bugs, 113 of which are fixed with 18 CVEs assigned. Our experiments show that POLYGLOT can support a wide range of programming languages, and outperforms existing fuzzers with up to 30× improvement in code coverage.
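The following toy sketch illustrates the general idea of IR-level constrained mutation plus semantic fix-up described above; the IR, the mutation rules, and the validation pass are hypothetical simplifications, not POLYGLOT's actual design.

```python
import random
from dataclasses import dataclass

# Toy IR: a program is a list of assignments over integer variables.
@dataclass
class Assign:
    dst: str
    op: str    # '+', '-', '*'
    lhs: str   # variable name or integer literal
    rhs: str

def mutate(prog: list[Assign], rng: random.Random) -> list[Assign]:
    """Constrained mutation: only swap operators or operands, never invent syntax."""
    mutant = [Assign(a.dst, a.op, a.lhs, a.rhs) for a in prog]
    node = rng.choice(mutant)
    node.op = rng.choice(['+', '-', '*'])
    node.rhs = rng.choice([a.dst for a in mutant] + [str(rng.randint(0, 9))])
    return mutant

def validate(prog: list[Assign]) -> list[Assign]:
    """Semantic fix-up: any use of a not-yet-defined variable becomes a literal."""
    defined: set[str] = set()
    for node in prog:
        for attr in ('lhs', 'rhs'):
            value = getattr(node, attr)
            if value.isalpha() and value not in defined:
                setattr(node, attr, '1')   # fall back to a constant
        defined.add(node.dst)
    return prog

def emit_python(prog: list[Assign]) -> str:
    return '\n'.join(f'{a.dst} = {a.lhs} {a.op} {a.rhs}' for a in prog)

if __name__ == '__main__':
    rng = random.Random(0)
    seed = [Assign('x', '+', '1', '2'), Assign('y', '*', 'x', '3')]
    print(emit_python(validate(mutate(seed, rng))))
```

The point of the validation pass is that mutants stay executable, so the fuzzer spends its budget exploring deep interpreter or compiler logic rather than tripping early rejection of malformed inputs.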
Pages: 642-658
Citations: 31
Journey to the Center of the Cookie Ecosystem: Unraveling Actors' Roles and Relationships
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.9796062
Iskander Sánchez-Rola, Matteo Dell'Amico, D. Balzarotti, Pierre-Antoine Vervier, Leyla Bilge
Web pages have been steadily increasing in complexity over time, including code snippets from several distinct origins and organizations. While this may be a known phenomenon, its implications on the panorama of cookie tracking received little attention until now. Our study focuses on filling this gap, through the analysis of crawl results that are both large-scale and fine-grained, encompassing the whole set of events that lead to the creation and sharing of around 138 million cookies from crawling more than 6 million webpages. Our analysis lets us paint a highly detailed picture of the cookie ecosystem, discovering an intricate network of connections between players that reciprocally exchange information and include each other's content in web pages whose owners may not even be aware. We discover that, in most webpages, tracking cookies are set and shared by organizations at the end of complex chains that involve several middlemen. We also study the impact of cookie ghostwriting, i.e., a common practice where an entity creates cookies in the name of another party, or the webpage. We attribute and define a set of roles in the cookie ecosystem, related to cookie creation and sharing. We see that organizations can and do follow different patterns, including behaviors that previous studies could not uncover: for example, many cookie ghostwriters send cookies they create to themselves, which makes them able to perform cross-site tracking even for users that deleted third-party cookies in their browsers. While some organizations concentrate the flow of information on themselves, others behave as dispatchers, allowing other organizations to perform tracking on the pages that include their content.
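As a rough illustration of how a crawl analysis might label cookie-ghostwriting events (a hypothetical rule of thumb, not the paper's methodology), the sketch below compares the origin of the script that set a cookie with the domain the cookie is scoped to; the domains and event fields are made up.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class CookieEvent:
    page_url: str       # page being crawled
    setter_origin: str  # origin of the script that called document.cookie
    cookie_domain: str  # Domain attribute the cookie ends up scoped to

def registrable_host(url_or_host: str) -> str:
    # Simplification: real analyses use the Public Suffix List (eTLD+1).
    host = urlparse(url_or_host).hostname or url_or_host
    return '.'.join(host.lstrip('.').split('.')[-2:])

def classify(ev: CookieEvent) -> str:
    page = registrable_host(ev.page_url)
    setter = registrable_host(ev.setter_origin)
    owner = registrable_host(ev.cookie_domain)
    if owner == page and setter != page:
        return 'ghostwritten first-party cookie'
    if owner == setter:
        return 'self-set cookie'
    return 'ghostwritten third-party cookie'

if __name__ == '__main__':
    ev = CookieEvent('https://news.example/article',
                     'https://cdn.tracker.test/tag.js',
                     '.news.example')
    print(classify(ev))   # -> ghostwritten first-party cookie
```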
Pages: 1990-2004
Citations: 22
OSPREY: Recovery of Variable and Data Structure via Probabilistic Analysis for Stripped Binary
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00051
Zhuo Zhang, Yapeng Ye, Wei You, Guanhong Tao, Wen-Chuan Lee, Yonghwi Kwon, Yousra Aafer, X. Zhang
Recovering variables and data structure information from stripped binary is a prominent challenge in binary program analysis. While various state-of-the-art techniques are effective in specific settings, such effectiveness may not generalize. This is mainly because the problem is inherently uncertain due to the information loss in compilation. Most existing techniques are deterministic and lack a systematic way of handling such uncertainty. We propose a novel probabilistic technique for variable and structure recovery. Random variables are introduced to denote the likelihood of an abstract memory location having various types and structural properties such as being a field of some data structure. These random variables are connected through probabilistic constraints derived through program analysis. Solving these constraints produces the posterior probabilities of the random variables, which essentially denote the recovery results. Our experiments show that our technique substantially outperforms a number of state-of-the-art systems, including IDA, Ghidra, Angr, and Howard. Our case studies demonstrate the recovered information improves binary code hardening and binary decompilation.
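A minimal sketch of the evidence-aggregation idea, assuming hand-picked hint weights; OSPREY's actual probabilistic inference over program-analysis constraints is far richer than this normalization of votes.

```python
from collections import defaultdict

def aggregate(hints: list[tuple[str, float]]) -> dict[str, float]:
    """Combine (candidate_type, confidence) hints into normalized scores."""
    scores = defaultdict(float)
    for candidate, confidence in hints:
        scores[candidate] += confidence
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

if __name__ == '__main__':
    # Hypothetical hints for one stack slot, e.g. derived from how it is accessed:
    hints = [
        ('pointer', 0.9),   # dereferenced by a load
        ('pointer', 0.7),   # passed to a function that dereferences its argument
        ('int64',   0.2),   # also used in an addition
    ]
    for t, p in sorted(aggregate(hints).items(), key=lambda kv: -kv[1]):
        print(f'{t:8s} {p:.2f}')
```

The resulting scores play the role of the posterior probabilities mentioned in the abstract: the analysis commits to the highest-scoring type but keeps the uncertainty explicit instead of making a hard deterministic choice.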
Pages: 813-832
Citations: 16
CanDID: Can-Do Decentralized Identity with Legacy Compatibility, Sybil-Resistance, and Accountability
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00038
Deepak Maram, Harjasleen Malvai, Fan Zhang, Nerla Jean-Louis, Alexander Frolov, T. Kell, Tyrone Lobban, Christine Moy, A. Juels, Andrew K. Miller
We present CanDID, a platform for practical, user-friendly realization of decentralized identity, the idea of empowering end users with management of their own credentials. While decentralized identity promises to give users greater control over their private data, it burdens users with management of private keys, creating a significant risk of key loss. Existing and proposed approaches also presume the spontaneous availability of a credential-issuance ecosystem, creating a bootstrapping problem. They also omit essential functionality, like resistance to Sybil attacks and the ability to detect misbehaving or sanctioned users while preserving user privacy. CanDID addresses these challenges by issuing credentials in a user-friendly way that draws securely and privately on data from existing, unmodified web service providers. Such legacy compatibility similarly enables CanDID users to leverage their existing online accounts for recovery of lost keys. Using a decentralized committee of nodes, CanDID provides strong confidentiality for user’s keys, real-world identities, and data, yet prevents users from spawning multiple identities and allows identification (and blacklisting) of sanctioned users. We present the CanDID architecture and report on experiments demonstrating its practical performance.
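To illustrate just the Sybil-resistance ingredient mentioned above, here is a hedged single-issuer sketch: a deterministic deduplication tag is derived from a unique legacy identifier with a keyed PRF, so a second enrollment by the same person is rejected without storing the identifier in the clear. In CanDID itself this logic is distributed across a committee so that no single party holds the key or learns identities; the key, identifier format, and class names below are purely illustrative.

```python
import hmac
import hashlib

# Hypothetical issuer-side secret; in CanDID the corresponding role is split
# across a decentralized committee so no single node can link tags to people.
ISSUER_KEY = b'committee-shared-secret-for-illustration'

def dedup_tag(unique_legacy_id: str) -> str:
    """Deterministic pseudonym for Sybil detection (a PRF over the identifier)."""
    return hmac.new(ISSUER_KEY, unique_legacy_id.encode(), hashlib.sha256).hexdigest()

class Issuer:
    def __init__(self) -> None:
        self.seen_tags: set[str] = set()

    def enroll(self, unique_legacy_id: str) -> bool:
        tag = dedup_tag(unique_legacy_id)
        if tag in self.seen_tags:
            return False          # second identity for the same person: rejected
        self.seen_tags.add(tag)
        return True               # credential issued

if __name__ == '__main__':
    issuer = Issuer()
    print(issuer.enroll('ssn:123-45-6789'))  # True
    print(issuer.enroll('ssn:123-45-6789'))  # False (Sybil attempt detected)
```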
Pages: 1348-1366
Citations: 58
StochFuzz: Sound and Cost-effective Fuzzing of Stripped Binaries by Incremental and Stochastic Rewriting
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00109
Zhuo Zhang, Wei You, Guanhong Tao, Yousra Aafer, Xuwei Liu, X. Zhang
Fuzzing stripped binaries poses many hard challenges as fuzzers require instrumenting binaries to collect runtime feedback for guiding input mutation. However, due to the lack of symbol information, correct instrumentation is difficult on stripped binaries. Existing techniques either rely on hardware and expensive dynamic binary translation engines such as QEMU, or make impractical assumptions such as binaries do not have inlined data. We observe that fuzzing is a highly repetitive procedure providing a large number of trial-and-error opportunities. As such, we propose a novel incremental and stochastic rewriting technique StochFuzz that piggy-backs on the fuzzing procedure. It generates many different versions of rewritten binaries whose validity can be approved/disapproved by numerous fuzzing runs. Probabilistic analysis is used to aggregate evidence collected through the sample runs and improve rewriting. The process eventually converges on a correctly rewritten binary. We evaluate StochFuzz on two sets of real-world programs and compare with five other baselines. The results show that StochFuzz outperforms state-of-the-art binary-only fuzzers (e.g., e9patch, ddisasm, and RetroWrite) in terms of soundness and cost-effectiveness and achieves performance comparable to source-based fuzzers. StochFuzz is publicly available [1].
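The "aggregate evidence across fuzzing runs" step can be pictured as a simple Bayesian update; the sketch below uses made-up likelihoods (instrumenting bytes that are actually inlined data usually breaks the program, instrumenting real code rarely does) and is not StochFuzz's actual probabilistic analysis.

```python
def update(p_code: float, crashed: bool,
           p_crash_if_data: float = 0.9, p_crash_if_code: float = 0.05) -> float:
    """One Bayesian update of P(address holds code) from a single fuzzing run."""
    if crashed:
        num = p_crash_if_code * p_code
        den = num + p_crash_if_data * (1.0 - p_code)
    else:
        num = (1.0 - p_crash_if_code) * p_code
        den = num + (1.0 - p_crash_if_data) * (1.0 - p_code)
    return num / den

if __name__ == '__main__':
    # Hypothetical observations for one rewritten address across fuzzing runs:
    p = 0.5
    for crashed in (False, False, True, False, False, False):
        p = update(p, crashed)
        print(f'crashed={crashed!s:5s}  P(code)={p:.3f}')
```

Because fuzzing supplies thousands of such trials for free, even noisy per-run signals converge quickly, which is the intuition behind piggy-backing the rewriting decisions on the fuzzing loop.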
Pages: 659-676
Citations: 22
SoK: Security and Privacy in the Age of Commercial Drones
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00005
Ben Nassi, Ron Bitton, R. Masuoka, A. Shabtai, Y. Elovici
As the number of drones increases and the era in which they begin to fill the skies approaches, an important question needs to be answered: From a security and privacy perspective, are society and drones really prepared to handle the challenges that a large volume of flights will create? In this paper, we investigate security and privacy in the age of commercial drones. First, we focus on the research question: Are drones and their ecosystems protected against attacks performed by malicious entities? We list a drone’s targets, present a methodology for reviewing attack and countermeasure methods, perform a comprehensive review, analyze scientific gaps, present conclusions, and discuss future research directions. Then, we focus on the research question: Is society protected against attacks conducted using drones? We list targets within society, profile the adversaries, review threats, present a methodology for reviewing countermeasures, perform a comprehensive review, analyze scientific gaps, present conclusions, and discuss future research directions. Finally, we focus on the primary research question: From the security and privacy perspective, are society and drones prepared to take their relationship one step further? Our analysis reveals that the technological means required to protect drones and society from one another has not yet been developed, and there is a tradeoff between the security and privacy of drones and that of society. That is, the level of security and privacy cannot be optimized concurrently for both entities, because the security and privacy of drones cannot be optimized without decreasing the security and privacy of society, and vice versa.
Pages: 1434-1451
Citations: 36
The Provable Security of Ed25519: Theory and Practice
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00042
Jacqueline Brendel, C. Cremers, Dennis Jackson, Mang Zhao
A standard requirement for a signature scheme is that it is existentially unforgeable under chosen message attacks (EUF-CMA), alongside other properties of interest such as strong unforgeability (SUF-CMA), and resilience against key substitution attacks. Remarkably, no detailed proofs have ever been given for these security properties for EdDSA, and in particular its Ed25519 instantiations. Ed25519 is one of the most efficient and widely used signature schemes, and different instantiations of Ed25519 are used in protocols such as TLS 1.3, SSH, Tor, ZCash, and WhatsApp/Signal. The differences between these instantiations are subtle, and only supported by informal arguments, with many works assuming results can be directly transferred from Schnorr signatures. Similarly, several proofs of protocol security simply assume that Ed25519 satisfies properties such as EUF-CMA or SUF-CMA. In this work we provide the first detailed analysis and security proofs of Ed25519 signature schemes. While the design of the schemes follows the well-established Fiat-Shamir paradigm, which should guarantee existential unforgeability, there are many side cases and encoding details that complicate the proofs, and all other security properties needed to be proven independently. Our work provides scientific rationale for choosing among several Ed25519 variants and understanding their properties, fills a much needed proof gap in modern protocol proofs that use these signatures, and supports further standardisation efforts.
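As a small anchor for the terminology (not a rendering of the paper's proofs), the sketch below signs and verifies with Ed25519 and shows that verification rejects a signature once the message changes, which is the kind of forgery EUF-CMA rules out; it assumes the third-party pyca/cryptography package is installed.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

message = b'transcript to be authenticated'
signature = signing_key.sign(message)

# Honest verification succeeds (verify() raises on failure, returns None on success).
verify_key.verify(signature, message)
print('original message: signature accepted')

# A mismatched (message, signature) pair is rejected; EUF-CMA formalizes that an
# attacker who sees signatures on chosen messages still cannot produce a valid
# pair for a new message.
try:
    verify_key.verify(signature, b'tampered transcript')
except InvalidSignature:
    print('tampered message: signature rejected')
```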
Pages: 1659-1676
Citations: 38
Wolverine: Fast, Scalable, and Communication-Efficient Zero-Knowledge Proofs for Boolean and Arithmetic Circuits
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00056
Chenkai Weng, Kang Yang, Jonathan Katz, X. Wang
Efficient zero-knowledge (ZK) proofs for arbitrary boolean or arithmetic circuits have recently attracted much attention. Existing solutions suffer from either significant prover overhead (i.e., high memory usage) or relatively high communication complexity (at least κ bits per gate, for computational security parameter κ). In this paper, we propose a new protocol for constant-round interactive ZK proofs that simultaneously allows for an efficient prover with asymptotically optimal memory usage and significantly lower communication compared to protocols with similar memory efficiency. Specifically:
• The prover in our ZK protocol has linear running time and, perhaps more importantly, memory usage linear in the memory needed to evaluate the circuit non-cryptographically. This allows our proof system to scale easily to very large circuits.
• For statistical security parameter ρ = 40, our ZK protocol communicates roughly 9 bits/gate for boolean circuits and 2–4 field elements/gate for arithmetic circuits over large fields.
Using 5 threads, 400 MB of memory, and a 200 Mbps network to evaluate a circuit with hundreds of billions of gates, our implementation (ρ = 40, κ = 128) runs at a rate of 0.45 μs/gate in the boolean case, and 1.6 μs/gate for an arithmetic circuit over a 61-bit field. We also present an improved subfield Vector Oblivious Linear Evaluation (sVOLE) protocol with malicious security that is of independent interest.
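The reported rates translate directly into rough end-to-end costs; the short calculation below simply applies the abstract's boolean-circuit figures (0.45 μs and about 9 bits per gate) to hypothetical circuit sizes and is not part of the paper's evaluation.

```python
# Rates reported in the abstract for the boolean case (rho = 40, kappa = 128).
SECONDS_PER_GATE = 0.45e-6     # 0.45 microseconds of proving time per gate
BITS_PER_GATE = 9              # roughly 9 bits of communication per gate

def estimate(num_gates: float) -> tuple[float, float]:
    """Return (hours of proving time, gigabytes of communication)."""
    hours = num_gates * SECONDS_PER_GATE / 3600
    gigabytes = num_gates * BITS_PER_GATE / 8 / 1e9
    return hours, gigabytes

if __name__ == '__main__':
    for gates in (1e9, 1e11):  # one billion and one hundred billion gates
        h, gb = estimate(gates)
        print(f'{gates:.0e} gates: ~{h:.2f} h proving time, ~{gb:.1f} GB transferred')
```

For the hundred-billion-gate circuits mentioned in the abstract this works out to roughly half a day of proving time and on the order of a hundred gigabytes of traffic, which is what makes the memory-light prover the interesting part of the result.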
Pages: 1074-1091
Citations: 94
CrossTalk: Speculative Data Leaks Across Cores Are Real
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00020
Hany Ragab, Alyssa Milburn, Kaveh Razavi, H. Bos, Cristiano Giuffrida
Recent transient execution attacks have demonstrated that attackers may leak sensitive information across security boundaries on a shared CPU core. Up until now, it seemed possible to prevent this by isolating potential victims and attackers on separate cores. In this paper, we show that the situation is more serious, as transient execution attacks can leak data across different cores on many modern Intel CPUs. We do so by investigating the behavior of x86 instructions, and in particular, we focus on complex microcoded instructions which perform offcore requests. Combined with transient execution vulnerabilities such as Micro-architectural Data Sampling (MDS), these operations can reveal internal CPU state. Using performance counters, we build a profiler, CROSSTALK, to examine the number and nature of such operations for many x86 instructions, and find that some instructions read data from a staging buffer which is shared between all CPU cores. To demonstrate the security impact of this behavior, we present the first cross-core attack using transient execution, showing that even the seemingly-innocuous CPUID instruction can be used by attackers to sample the entire staging buffer containing sensitive data – most importantly, output from the hardware random number generator (RNG) – across cores. We show that this can be exploited in practice to attack SGX enclaves running on a completely different core, where an attacker can control leakage using practical performance degradation attacks, and demonstrate that we can successfully determine enclave private keys. Since existing mitigations which rely on spatial or temporal partitioning are largely ineffective to prevent our proposed attack, we also discuss potential new mitigation techniques.
Pages: 1852-1867
Citations: 108