
2021 IEEE Symposium on Security and Privacy (SP): Latest Publications

Black Widow: Blackbox Data-driven Web Scanning
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00022
Benjamin Eriksson, Giancarlo Pellegrino, A. Sabelfeld
Modern web applications are an integral part of our digital lives. As we put more trust in web applications, the need for security increases. At the same time, detecting vulnerabilities in web applications has become increasingly hard, due to the complexity, dynamism, and reliance on third-party components. Blackbox vulnerability scanning is especially challenging because (i) for deep penetration of web applications scanners need to exercise such browsing behavior as user interaction and asynchrony, and (ii) for detection of nontrivial injection attacks, such as stored cross-site scripting (XSS), scanners need to discover inter-page data dependencies. This paper illuminates key challenges for crawling and scanning the modern web. Based on these challenges we identify three core pillars for deep crawling and scanning: navigation modeling, traversing, and tracking inter-state dependencies. While prior efforts are largely limited to the separate pillars, we suggest an approach that leverages all three. We develop Black Widow, a blackbox data-driven approach to web crawling and scanning. We demonstrate the effectiveness of the crawling by code coverage improvements ranging from 63% to 280% compared to other crawlers across all applications. Further, we demonstrate the effectiveness of the web vulnerability scanning by featuring no false positives and finding more cross-site scripting vulnerabilities than previous methods. In older applications, used in previous research, we find vulnerabilities that the other methods miss. We also find new vulnerabilities in production software, including HotCRP, osCommerce, PrestaShop and WordPress.
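To make the third pillar concrete, the sketch below illustrates the general idea of tracking inter-state data dependencies for stored XSS: every submitted value carries a unique token, and any page where the token later resurfaces reveals an inter-page data flow. The in-memory ToyGuestbook application and the /post and /view route names are invented for illustration; this is a minimal sketch, not Black Widow's implementation.

```python
# Toy illustration of inter-state dependency tracking for stored XSS detection.
# The "application" here is a hypothetical in-memory guestbook, not Black Widow's
# actual crawler; the idea is to tag every submitted value with a unique token
# and later search all reachable pages for that token.
import uuid

class ToyGuestbook:
    """Stand-in for a web application: comments posted on /post appear on /view."""
    def __init__(self):
        self.comments = []

    def submit(self, text):          # analogous to an HTML form submission
        self.comments.append(text)

    def render_view_page(self):      # analogous to fetching another page later
        return "<ul>" + "".join(f"<li>{c}</li>" for c in self.comments) + "</ul>"

def find_inter_state_dependencies(app):
    # 1. Inject a unique, traceable token through every input the crawler finds.
    token = f"xss-probe-{uuid.uuid4().hex}"
    app.submit(token)
    # 2. Re-crawl and record every page where the token resurfaces: each hit is
    #    an inter-page data dependency that a stored-XSS payload could exploit.
    dependencies = []
    if token in app.render_view_page():
        dependencies.append(("/post", "/view"))
    return dependencies

if __name__ == "__main__":
    print(find_inter_state_dependencies(ToyGuestbook()))  # [('/post', '/view')]
```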
Citations: 23
High-Assurance Cryptography in the Spectre Era
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00046
G. Barthe, S. Cauligi, B. Grégoire, Adrien Koutsos, Kevin Liao, Tiago Oliveira, Swarn Priya, Tamara Rezk, P. Schwabe
High-assurance cryptography leverages methods from program verification and cryptography engineering to deliver efficient cryptographic software with machine-checked proofs of memory safety, functional correctness, provable security, and absence of timing leaks. Traditionally, these guarantees are established under a sequential execution semantics. However, this semantics is not aligned with the behavior of modern processors that make use of speculative execution to improve performance. This mismatch, combined with the high-profile Spectre-style attacks that exploit speculative execution, naturally casts doubts on the robustness of high-assurance cryptography guarantees. In this paper, we dispel these doubts by showing that the benefits of high-assurance cryptography extend to speculative execution, costing only a modest performance overhead. We build atop the Jasmin verification framework an end-to-end approach for proving properties of cryptographic software under speculative execution, and validate our approach experimentally with efficient, functionally correct assembly implementations of ChaCha20 and Poly1305, which are secure against both traditional timing and speculative execution attacks.
Citations: 21
Trust, But Verify: A Longitudinal Analysis Of Android OEM Compliance and Customization
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00074
Andrea Possemato, Simone Aonzo, D. Balzarotti, Y. Fratantonio
Nowadays, more than two billion mobile devices run Android OS. At the core of this success are the open source nature of the Android Open Source Project and vendors’ ability to customize the code base and ship it on their own devices. While the possibility of customizations is beneficial to vendors, they can potentially lead to compatibility and security problems. To prevent these problems, Google developed a set of requirements that must be satisfied for a vendor to brand its devices as "Android," and recently introduced Project Treble as an effort to partition vendor customizations. These requirements are encoded as part of a textual document (called Compatibility Definition Document, or CDD) and various automated tests. This paper performs the first longitudinal study on Android OEM customizations. We first built a dataset of 2,907 ROMs, spanning 42 different vendors and covering Android versions from 1.6 to 9.0 (years 2009–2020). We then developed an analysis framework and pipeline to extract each ROM’s customization layers and evaluate it across several metrics. For example, we analyze ROMs to determine whether they are compliant with respect to the various requirements and whether their customizations negatively affect the security posture of the overall device. In the process, we focus on various aspects, ranging from security hardening of binaries and SELinux policies to Android init scripts and kernel security hardening techniques. Our results are worrisome. We found that 579 of the 2,907 ROMs (~20%) have at least one violation of the CDD related to their Android version — incredibly, 11 of them are branded by Google itself. Some of our findings suggest that vendors often go out of their way to bypass or "comment out" safety nets added by the Android security team. In other cases, we found ROMs whose modified init scripts launch, at boot, outdated versions of programs (with known CVEs and public PoCs) as root and reachable by a remote attacker (e.g., tcpdump). This paper shows that Google’s efforts are not enough, and we offer several recommendations on how to improve the compliance check pipelines.
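As a concrete illustration of the kind of init-script check described above, the sketch below scans a made-up Android init .rc snippet for services configured to run as root. The .rc parsing is deliberately simplified and the service definition is hypothetical; this is a minimal sketch of one check, not the paper's analysis pipeline.

```python
# Minimal sketch of one compliance/security check in the spirit of the paper:
# flag services in Android init .rc files that run as root. The .rc grammar
# handling is deliberately simplified and the snippet below is invented.
import re

RC_SNIPPET = """\
service tcpdump_svc /system/bin/tcpdump -i any -w /data/cap.pcap
    class main
    user root
    oneshot
"""

def find_root_services(rc_text):
    findings = []
    current = None
    for line in rc_text.splitlines():
        m = re.match(r"service\s+(\S+)\s+(\S+)", line)
        if m:
            current = {"name": m.group(1), "binary": m.group(2)}
        elif current and re.match(r"\s+user\s+root\b", line):
            findings.append(current)
    return findings

if __name__ == "__main__":
    for svc in find_root_services(RC_SNIPPET):
        print(f"service {svc['name']} runs {svc['binary']} as root")
```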
Citations: 9
SoK: Computer-Aided Cryptography
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00008
M. Barbosa, G. Barthe, K. Bhargavan, B. Blanchet, C. Cremers, Kevin Liao, Bryan Parno
Computer-aided cryptography is an active area of research that develops and applies formal, machine-checkable approaches to the design, analysis, and implementation of cryptography. We present a cross-cutting systematization of the computer-aided cryptography literature, focusing on three main areas: (i) design-level security (both symbolic security and computational security), (ii) functional correctness and efficiency, and (iii) implementation-level security (with a focus on digital side-channel resistance). In each area, we first clarify the role of computer-aided cryptography—how it can help and what the caveats are—in addressing current challenges. We next present a taxonomy of state-of-the-art tools, comparing their accuracy, scope, trustworthiness, and usability. Then, we highlight their main achievements, trade-offs, and research challenges. After covering the three main areas, we present two case studies. First, we study efforts in combining tools focused on different areas to consolidate the guarantees they can provide. Second, we distill the lessons learned from the computer-aided cryptography community’s involvement in the TLS 1.3 standardization effort. Finally, we conclude with recommendations to paper authors, tool developers, and standardization bodies moving forward.
Citations: 91
Merkle2: A Low-Latency Transparency Log System
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00088
Yuncong Hu, Kian Hooshmand, Harika Kalidhindi, Seung Jin Yang, R. A. Popa
Transparency logs are designed to help users audit untrusted servers. For example, Certificate Transparency (CT) enables users to detect when a compromised Certificate Authority (CA) has issued a fake certificate. Practical state-of-the-art transparency log systems, however, suffer from high monitoring costs when used for low-latency applications. To reduce monitoring costs, such systems often require users to wait an hour or more for their updates to take effect, inhibiting low-latency applications. We propose Merkle2, a transparency log system that supports both efficient monitoring and low-latency updates. To achieve this goal, we construct a new multi-dimensional, authenticated data structure that nests two types of Merkle trees, hence the name of our system, Merkle2. Using this data structure, we then design a transparency log system with efficient monitoring and lookup protocols that enables low-latency updates. In particular, all the operations in Merkle2 are independent of update intervals and are (poly)logarithmic to the number of entries in the log. Merkle2 not only has excellent asymptotics when compared to prior work, but is also efficient in practice. Our evaluation shows that Merkle2 propagates updates in as little as 1 second and can support 100× more users than state-of-the-art transparency logs.
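For readers unfamiliar with the underlying primitive, the sketch below shows a plain append-only Merkle tree with inclusion proofs, the basic building block that transparency logs rely on. It is not the nested Merkle2 construction itself, just a minimal illustration of how a root commits to a log and how membership is verified.

```python
# Basic append-only Merkle tree with inclusion proofs -- the primitive that
# transparency logs build on. This is NOT the nested Merkle2 construction,
# just the standard building block, kept minimal for illustration.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    level = [h(b"\x00" + leaf) for leaf in leaves]      # domain-separated leaf hashes
    while len(level) > 1:
        if len(level) % 2:                               # duplicate last node on odd levels
            level.append(level[-1])
        level = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    proof, level = [], [h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))   # (hash, sibling_is_on_the_left)
        level = [h(b"\x01" + level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = h(b"\x00" + leaf)
    for sibling, is_left in proof:
        node = h(b"\x01" + sibling + node) if is_left else h(b"\x01" + node + sibling)
    return node == root

if __name__ == "__main__":
    log = [b"cert-0", b"cert-1", b"cert-2", b"cert-3", b"cert-4"]
    root = merkle_root(log)
    print(verify(log[2], inclusion_proof(log, 2), root))  # True
```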
Citations: 23
Cross-Domain Access Control Encryption: Arbitrary-policy, Constant-size, Efficient
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00023
Xiuhua Wang, Sherman S. M. Chow
Access control is a fundamental keystone in security. Damgård, Haagh, and Orlandi (TCC 2016) introduced access control encryption (ACE) that enforces no-read and no-write rules without revealing the senders, receivers, or the content of the encrypted traffic. Existing designs of ACE for arbitrary policy (covering all possibilities of read/write relationship) rely on indistinguishability obfuscation or lattice-based assumptions, with either exponential-size ciphertexts or circuit realization of policy. Also, their designs mandate a private sanitizer key to remain perpetually online for sanitization. The only existing scheme that can afford a public sanitizer key supports only simple policies. To summarize, state-of-the-art ACE schemes only feature at most two of the following desirable properties: arbitrary-policy, constant-size (ciphertext), and efficient (sanitization). This paper introduces an ACE scheme for arbitrary policy without sanitizer key, which solves the open question posed by Kim and Wu (Asiacrypt 2017). We also put forth the notion of cross-domain ACE, separating the key generator into the sender-authority and receiver-authority. Our scheme requires structure-preserving signatures, non-interactive zero-knowledge proof, and sanitizable identity-based broadcast encryption as the building blocks. It can be instantiated directly from pairing-based assumptions and features constant ciphertext size. We also prototyped our scheme and demonstrated its practical efficiency.
Citations: 11
DifuzzRTL: Differential Fuzz Testing to Find CPU Bugs
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00103
Jaewon Hur, Suhwan Song, Dongup Kwon, Eun-Tae Baek, Jangwoo Kim, Byoungyoung Lee
Security bugs in CPUs have critical security impacts on all computation-related hardware and software components, as the CPU is the core of computation. In spite of the fact that architecture and security communities have explored a vast number of static or dynamic analysis techniques to automatically identify such bugs, the problem remains unsolved and challenging largely due to the complex nature of CPU RTL designs. This paper proposes DIFUZZRTL, an RTL fuzzer to automatically discover unknown bugs in CPU RTLs. DIFUZZRTL develops a register-coverage guided fuzzing technique, which efficiently yet correctly identifies a state transition in the finite state machine of RTL designs. DIFUZZRTL also develops several new techniques in consideration of unique RTL design characteristics, including cycle-sensitive register coverage guiding, asynchronous interrupt events handling, a unified CPU input format with TileLink protocols, and drop-in-replacement designs to support various CPU RTLs. We implemented DIFUZZRTL, and performed the evaluation with three real-world open source CPU RTLs: OpenRISC Mor1kx Cappuccino, RISC-V Rocket Core, and RISC-V Boom Core. During the evaluation, DIFUZZRTL identified 16 new bugs from these CPU RTLs, all of which were confirmed by the respective development communities and vendors. Six of those are assigned CVE numbers, and to the best of our knowledge, we reported the first and only CVE for RISC-V cores, demonstrating its strong practical impact on the security community.
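The sketch below illustrates register-coverage-guided fuzzing in miniature: inputs are kept in the corpus when they drive the design into control-register states not seen before. The simulate() function is a made-up stand-in for an RTL simulator, and nothing here reflects DifuzzRTL's actual implementation.

```python
# Sketch of a coverage-guided fuzzing loop where "coverage" is the set of
# abstracted control-register states observed during simulation. simulate()
# is an invented stand-in for an RTL simulator, not part of DifuzzRTL.
import random

def simulate(program: bytes):
    """Toy stand-in for a CPU RTL simulation: returns the sequence of
    (abstracted) control-register states reached while executing `program`."""
    state = 0
    states = []
    for byte in program:
        state = (state * 31 + byte) % 1024     # pretend FSM transition
        states.append(state)
    return states

def mutate(program: bytes) -> bytes:
    data = bytearray(program)
    if data:
        data[random.randrange(len(data))] = random.randrange(256)
    data.append(random.randrange(256))
    return bytes(data)

def fuzz(rounds=2000):
    corpus = [bytes([0])]
    seen_states = set()
    for _ in range(rounds):
        seed = random.choice(corpus)
        candidate = mutate(seed)
        new_states = set(simulate(candidate)) - seen_states
        if new_states:                          # keep inputs reaching new register states
            seen_states |= new_states
            corpus.append(candidate)
    return len(corpus), len(seen_states)

if __name__ == "__main__":
    print("corpus size, distinct register states:", fuzz())
```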
Citations: 39
Post-quantum WireGuard
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00030
Andreas Hülsing, Kai-Chun Ning, P. Schwabe, F. Weber, Philip Zimmermann
In this paper we present PQ-WireGuard, a post-quantum variant of the handshake in the WireGuard VPN protocol (NDSS 2017). Unlike most previous work on post-quantum security for real-world protocols, this variant considers not only post-quantum confidentiality (or forward secrecy) but also post-quantum authentication. To achieve this, we replace the Diffie-Hellman-based handshake with a more generic approach using only key-encapsulation mechanisms (KEMs). We establish security of PQ-WireGuard, adapting the security proofs for WireGuard in the symbolic model and in the standard model to our construction. We then instantiate this generic construction with concrete post-quantum secure KEMs, which we carefully select to achieve high security and speed. We demonstrate the competitiveness of PQ-WireGuard by presenting extensive benchmarking results comparing it to widely deployed VPN solutions.
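The sketch below shows the general "replace Diffie-Hellman with KEMs" pattern: the initiator encapsulates against the responder's static and ephemeral public keys, and both sides hash the resulting shared secrets into a session key. The PlaceholderKEM is deliberately insecure and exists only so the example runs; the key schedule is a simplification, not WireGuard's or PQ-WireGuard's actual handshake.

```python
# Illustration of a KEM-only handshake pattern. The PlaceholderKEM below is an
# insecure toy (XOR masking) used purely so the sketch is runnable; a real
# deployment would use a post-quantum KEM and a proper Noise-style key schedule.
import hashlib, os

class PlaceholderKEM:
    @staticmethod
    def keygen():
        sk = os.urandom(32)
        pk = hashlib.sha256(b"pk" + sk).digest()
        return sk, pk

    @staticmethod
    def encaps(pk):
        ss = os.urandom(32)
        ct = bytes(a ^ b for a, b in zip(ss, hashlib.sha256(b"mask" + pk).digest()))
        return ct, ss

    @staticmethod
    def decaps(sk, ct):
        pk = hashlib.sha256(b"pk" + sk).digest()
        return bytes(a ^ b for a, b in zip(ct, hashlib.sha256(b"mask" + pk).digest()))

def kdf(*secrets):
    return hashlib.sha256(b"handshake" + b"".join(secrets)).hexdigest()

# Responder holds a long-term (static) key pair plus a fresh ephemeral one.
static_sk, static_pk = PlaceholderKEM.keygen()
eph_sk, eph_pk = PlaceholderKEM.keygen()

# Initiator: encapsulate to both keys, derive the session key.
ct1, ss1 = PlaceholderKEM.encaps(static_pk)
ct2, ss2 = PlaceholderKEM.encaps(eph_pk)
initiator_key = kdf(ss1, ss2)

# Responder: decapsulate both ciphertexts, derive the same key.
responder_key = kdf(PlaceholderKEM.decaps(static_sk, ct1),
                    PlaceholderKEM.decaps(eph_sk, ct2))

print(initiator_key == responder_key)   # True
```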
Citations: 36
CrawlPhish: Large-scale Analysis of Client-side Cloaking Techniques in Phishing
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00021
Penghui Zhang, Adam Oest, Haehyun Cho, Zhibo Sun, RC Johnson, Brad Wardman, Shaown Sarker, A. Kapravelos, Tiffany Bao, Ruoyu Wang, Yan Shoshitaishvili, Adam Doupé, Gail-Joon Ahn
Phishing is a critical threat to Internet users. Although an extensive ecosystem serves to protect users, phishing websites are growing in sophistication, and they can slip past the ecosystem’s detection systems—and subsequently cause real-world damage—with the help of evasion techniques. Sophisticated client-side evasion techniques, known as cloaking, leverage JavaScript to enable complex interactions between potential victims and the phishing website, and can thus be particularly effective in slowing or entirely preventing automated mitigations. Yet, neither the prevalence nor the impact of client-side cloaking has been studied. In this paper, we present CrawlPhish, a framework for automatically detecting and categorizing client-side cloaking used by known phishing websites. We deploy CrawlPhish over 14 months between 2018 and 2019 to collect and thoroughly analyze a dataset of 112,005 phishing websites in the wild. By adapting state-of-the-art static and dynamic code analysis, we find that 35,067 of these websites have 1,128 distinct implementations of client-side cloaking techniques. Moreover, we find that attackers’ use of cloaking grew from 23.32% initially to 33.70% by the end of our data collection period. Detection of cloaking by our framework exhibited low false-positive and false-negative rates of 1.45% and 1.75%, respectively. We analyze the semantics of the techniques we detected and propose a taxonomy of eight types of evasion across three high-level categories: User Interaction, Fingerprinting, and Bot Behavior. Using 150 artificial phishing websites, we empirically show that each category of evasion technique is effective in avoiding browser-based phishing detection (a key ecosystem defense). Additionally, through a user study, we verify that the techniques generally do not discourage victim visits. Therefore, we propose ways in which our methodology can be used to not only improve the ecosystem’s ability to mitigate phishing websites with client-side cloaking, but also continuously identify emerging cloaking techniques as they are launched by attackers.
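As a rough illustration of what client-side cloaking looks like in page source, the sketch below flags a few tell-tale JavaScript idioms (bot fingerprinting, user-interaction gating, referrer checks) with simple pattern matching. The pattern list and sample page are invented for illustration and do not reproduce CrawlPhish's taxonomy or its combined static/dynamic analysis.

```python
# Very rough sketch of flagging client-side cloaking idioms by scanning a
# page's JavaScript for tell-tale checks. The patterns and sample page are
# illustrative guesses, not CrawlPhish's actual detection logic.
import re

CLOAKING_PATTERNS = {
    "bot fingerprinting":    r"navigator\.webdriver|PhantomJS|HeadlessChrome",
    "user-interaction gate": r"addEventListener\(\s*['\"](?:mousemove|scroll|click)",
    "referrer check":        r"document\.referrer",
}

def classify_cloaking(page_source: str):
    return {label: pattern
            for label, pattern in CLOAKING_PATTERNS.items()
            if re.search(pattern, page_source)}

SAMPLE = """
<script>
  if (navigator.webdriver) { window.location = "https://example.com"; }
  document.addEventListener('mousemove', function () { showPhishingForm(); });
</script>
"""

if __name__ == "__main__":
    for label in classify_cloaking(SAMPLE):
        print("possible cloaking technique:", label)
```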
Citations: 49
DynPTA: Combining Static and Dynamic Analysis for Practical Selective Data Protection
Pub Date : 2021-05-01 DOI: 10.1109/SP40001.2021.00082
Tapti Palit, Jarin Firose Moon, F. Monrose, M. Polychronakis
As control flow hijacking attacks become more challenging due to the deployment of various exploit mitigation technologies, the leakage of sensitive process data through the exploitation of memory disclosure vulnerabilities is becoming an increasingly important threat. To make matters worse, recently introduced transient execution attacks provide a new avenue for leaking confidential process data. As a response, various approaches for selectively protecting subsets of critical in-memory data have been proposed, which, however, either require a significant code refactoring effort or do not scale to large applications. In this paper we present DynPTA, a selective data protection approach that combines static analysis with scoped dynamic data flow tracking (DFT) to keep a subset of manually annotated sensitive data always encrypted in memory. DynPTA ameliorates the inherent overapproximation of pointer analysis—a significant challenge that has prevented previous approaches from supporting large applications—by relying on lightweight label lookups to determine if potentially sensitive data is actually sensitive. Labeled objects are tracked only within the subset of value flows that may carry potentially sensitive data, requiring only a fraction of the program’s code to be instrumented for DFT. We experimentally evaluated DynPTA with real-world applications and demonstrate that it can prevent memory disclosure (Heartbleed) and transient execution (Spectre) attacks from leaking the protected data, while incurring a modest runtime overhead of up to 19.2% when protecting the private TLS key of Nginx with OpenSSL.
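The sketch below is a toy model of the "label lookup before decrypting" idea: objects marked sensitive are kept encrypted in a simulated heap and are only decrypted at instrumented accesses, after a label check. It is a purely illustrative Python model under invented names; DynPTA itself instruments C/C++ programs and protects annotated data with real in-memory encryption.

```python
# Toy model of selective in-memory data protection with label lookups:
# objects whose ids are in SENSITIVE_LABELS are stored XOR-"encrypted" in the
# simulated heap and transparently decrypted only at instrumented reads.
# Illustrative only; the XOR cipher and dict-based heap are stand-ins.
import os

KEY = os.urandom(16)
HEAP = {}                 # simulated memory: object id -> stored bytes
SENSITIVE_LABELS = set()  # ids of objects the static analysis marked sensitive

def xor(data: bytes) -> bytes:
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def store(obj_id: str, value: bytes, sensitive: bool = False):
    if sensitive:
        SENSITIVE_LABELS.add(obj_id)
        value = xor(value)            # sensitive data never sits in plaintext
    HEAP[obj_id] = value

def load(obj_id: str) -> bytes:
    value = HEAP[obj_id]
    # Instrumented access: the label lookup decides whether decryption is needed,
    # filtering the over-approximate pointer-analysis results at runtime.
    return xor(value) if obj_id in SENSITIVE_LABELS else value

if __name__ == "__main__":
    store("tls_private_key", b"-----BEGIN KEY-----", sensitive=True)
    store("http_request", b"GET / HTTP/1.1")
    print(HEAP["tls_private_key"])       # ciphertext at rest
    print(load("tls_private_key"))       # plaintext only at the instrumented load
```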
Citations: 23