
Latest publications: Proceedings of the ... USENIX Security Symposium

Research on the Security of Visual Reasoning CAPTCHA
Pub Date : 2023-11-01 DOI: 10.1109/tdsc.2023.3238408
Yipeng Gao, Haichang Gao, Sainan Luo, Yang Zi, Shudong Zhang, Wenjie Mao, Ping Wang, Yulong Shen, Jeff Yan
CAPTCHA is an effective mechanism for protecting computers from malicious bots. With the development of deep learning techniques, current mainstream text-based and traditional image-based CAPTCHAs have been proven to be insecure. Therefore, a major effort has been directed toward developing new CAPTCHAs by utilizing some other hard Artificial Intelligence (AI) problems. Recently, some commercial companies (Tencent, NetEase, Geetest, etc.) have begun deploying a new type of CAPTCHA based on visual reasoning to defend against bots. As a newly proposed CAPTCHA, it is therefore natural to ask a fundamental question: are visual reasoning CAPTCHAs as secure as their designers expect? This paper explores the security of visual reasoning CAPTCHAs. We proposed a modular attack and evaluated it on six different real-world visual reasoning CAPTCHAs, which achieved overall success rates ranging from 79.2% to 98.6%. The results show that visual reasoning CAPTCHAs are not as secure as anticipated; this latest effort to use novel, hard AI problems for CAPTCHAs has not yet succeeded. Then, we summarize some guidelines for designing better visual-based CAPTCHAs, and based on the lessons we learned from our attacks, we propose a new CAPTCHA based on commonsense knowledge (CsCAPTCHA) and show its security and usability experimentally.
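The modular structure described above can be sketched as a pipeline of replaceable stages. The following toy Python sketch is purely illustrative: the object detector, instruction parser, and all labels and coordinates are hypothetical placeholders, not the paper's actual models.

```python
# Hypothetical sketch of a modular attack pipeline against a visual
# reasoning CAPTCHA: each stage is a separate, swappable module, so the same
# skeleton can target different real-world schemes. Every stage below is a
# hard-coded placeholder standing in for a trained model.

from dataclasses import dataclass


@dataclass
class Challenge:
    image: bytes      # the CAPTCHA image
    instruction: str  # e.g. "Click the object left of the red cube"


def detect_objects(image):
    """Stage 1: locate candidate objects (placeholder detections)."""
    return [{"label": "red cube", "center": (40, 60)},
            {"label": "blue ball", "center": (10, 60)}]


def parse_instruction(instruction):
    """Stage 2: turn the prompt into a predicate over detected objects.

    Placeholder: a real module would parse arbitrary prompts; this one only
    grounds "left of the red cube".
    """
    def predicate(obj, objects):
        anchor = next(o for o in objects if o["label"] == "red cube")
        return obj is not anchor and obj["center"][0] < anchor["center"][0]
    return predicate


def solve(challenge):
    """Stage 3: apply the parsed predicate to the detected objects."""
    objects = detect_objects(challenge.image)
    predicate = parse_instruction(challenge.instruction)
    matches = [o for o in objects if predicate(o, objects)]
    return matches[0]["center"] if matches else None


click = solve(Challenge(image=b"", instruction="Click the object left of the red cube"))
print(click)  # coordinates the bot would click
```

Because each stage is independent, swapping in a different detector or instruction parser retargets the whole pipeline, which is what makes a modular attack reusable across schemes.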
Pages: 4976-4992
Citations: 8
A Highly Accurate Query-Recovery Attack against Searchable Encryption using Non-Indexed Documents
Pub Date : 2023-06-27 DOI: 10.48550/arXiv.2306.15302
Marc Damie, Florian Hahn, Andreas Peter
Cloud data storage solutions offer customers cost-effective storage and reduced data-management overhead. While attractive, data security remains a core concern. Traditional encryption protects stored documents but hinders simple functionalities such as keyword search. Therefore, searchable encryption schemes have been proposed to allow search on encrypted data. Efficient schemes leak at least the access pattern (the documents accessed per keyword search), which is known to be exploitable in query-recovery attacks if the attacker has a significant amount of background knowledge on the stored documents. Existing attacks achieve decent results only under strong adversary models (e.g., knowledge of at least 20% of the indexed documents, or additional knowledge such as query frequencies), and they give no metric to evaluate the certainty of recovered queries. This hampers their practical utility and questions their relevance in the real world. We propose a refined score attack that achieves query recovery rates of around 85% without requiring exact background knowledge of the stored documents; a distributionally similar, but otherwise different (i.e., non-indexed), dataset suffices. The attack starts with very few known queries (around 10 in our experiments over datasets of varying size) and then iteratively recovers further queries with confidence scores, adding previously recovered queries with high confidence scores to the set of known queries. In addition to high recovery rates, our approach yields interpretable results in terms of confidence scores.
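The iterative refinement loop can be illustrated with a toy sketch. The co-occurrence matrices below are synthetic stand-ins rather than real searchable-encryption leakage; what the sketch conveys is the idea described above: match each unknown query to the keyword with the nearest co-occurrence vector (projected onto the known queries), score confidence as the gap between best and second-best candidate, and feed the most confident match back in as a known query.

```python
# Toy sketch of an iterative score attack with confidence-based refinement.
# kw_cooc models keyword co-occurrence estimated from an auxiliary,
# non-indexed dataset; qr_cooc models the query co-occurrence observed from
# access-pattern leakage (here: the same matrix plus small noise).
import numpy as np

rng = np.random.default_rng(0)
n_kw = 8
kw_cooc = rng.random((n_kw, n_kw))                        # auxiliary estimate
qr_cooc = kw_cooc + rng.normal(0, 0.001, kw_cooc.shape)   # leaked observation

known = {0: 0, 1: 1}                    # query index -> keyword index
unknown = set(range(n_kw)) - set(known)

while unknown:
    cols = sorted(known)                # project onto known-query coordinates
    kw_cols = [known[c] for c in cols]  # matching keyword coordinates
    best = None
    for q in unknown:
        qv = qr_cooc[q, cols]
        dists = [float(np.linalg.norm(qv - kw_cooc[k, kw_cols]))
                 for k in range(n_kw)]
        order = np.argsort(dists)
        conf = dists[order[1]] - dists[order[0]]  # best-to-runner-up gap
        if best is None or conf > best[0]:
            best = (conf, q, int(order[0]))
    _, q, k = best                      # accept the most confident recovery
    known[q] = k
    unknown.remove(q)

print(known)
```

As more queries are recovered, the projection gains dimensions, so later matches become easier, which is why starting from only a handful of known queries can still converge.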
Pages: 143-160
Citations: 17
Hot Pixels: Frequency, Power, and Temperature Attacks on GPUs and ARM SoCs
Pub Date : 2023-05-22 DOI: 10.48550/arXiv.2305.12784
Hritvik Taneja, Jason Kim, Jie Xu, S. V. Schaik, Daniel Genkin, Y. Yarom
The drive to create thinner, lighter, and more energy-efficient devices has forced modern SoCs to balance a delicate tradeoff between power consumption, heat dissipation, and execution speed (i.e., frequency), typically via dynamic voltage and frequency scaling (DVFS). While beneficial, these DVFS mechanisms have also resulted in software-visible hybrid side channels, which use software to probe analog properties of computing devices. Such hybrid attacks are an emerging threat that can bypass countermeasures for traditional microarchitectural side-channel attacks. Given the rising popularity of both Arm SoCs and GPUs, in this paper we investigate the susceptibility of these devices to information leakage via power, temperature, and frequency, as measured via internal sensors. We demonstrate that the observed sensor data correlates with both the instructions executed and the data processed, allowing us to mount software-visible hybrid side-channel attacks on these devices. To demonstrate the real-world impact of this issue, we present JavaScript-based pixel-stealing and history-sniffing attacks on Chrome and Safari, with all side-channel countermeasures enabled. Finally, we also show website fingerprinting attacks that require no elevated privileges.
Pages: 6275-6292
Citations: 4
PTW: Pivotal Tuning Watermarking for Pre-Trained Image Generators
Pub Date : 2023-04-14 DOI: 10.48550/arXiv.2304.07361
Nils Lukas, F. Kerschbaum
Deepfakes refer to content synthesized using deep generators, which, when misused, have the potential to erode trust in digital media. Synthesizing high-quality deepfakes requires access to large and complex generators that only a few entities can train and provide. The threat is malicious users who exploit access to the provided model and generate harmful deepfakes without risking detection. Watermarking makes deepfakes detectable by embedding an identifiable code into the generator that is later extractable from its generated images. We propose Pivotal Tuning Watermarking (PTW), a method for watermarking pre-trained generators (i) three orders of magnitude faster than watermarking from scratch and (ii) without the need for any training data. We improve existing watermarking methods and scale to generators 4x larger than related work. PTW can embed longer codes than existing methods while better preserving the generator's image quality. We propose rigorous, game-based definitions for robustness and undetectability, and our study reveals that watermarking is not robust against an adaptive white-box attacker who has control over the generator's parameters. We propose an adaptive attack that can successfully remove any watermarking with access to only 200 non-watermarked images. Our work challenges the trustworthiness of watermarking for deepfake detection when the parameters of a generator are available. Source code to reproduce our experiments is available at https://github.com/dnn-security/gan-watermark.
Pages: 2241-2258
Citations: 7
Inductive Graph Unlearning
Pub Date : 2023-04-06 DOI: 10.48550/arXiv.2304.03093
Cheng-Long Wang, Mengdi Huai, Di Wang
As a way to implement the "right to be forgotten" in machine learning, machine unlearning aims to completely remove the contributions and information of the samples to be deleted from a trained model without affecting the contributions of other samples. Recently, many frameworks for machine unlearning have been proposed, and most of them focus on image and text data. To extend machine unlearning to graph data, GraphEraser has been proposed. However, a critical issue is that GraphEraser is specifically designed for the transductive graph setting, where the graph is static and the attributes and edges of test nodes are visible during training. It is unsuitable for the inductive setting, where the graph can be dynamic and the test graph information is not visible in advance. Such inductive capability is essential for production machine learning systems with evolving graphs, such as social media and transaction networks. To fill this gap, we propose the GUided InDuctivE Graph Unlearning framework (GUIDE). GUIDE consists of three components: guided graph partitioning with fairness and balance, efficient subgraph repair, and similarity-based aggregation. Empirically, we evaluate our method on several inductive benchmarks and evolving transaction graphs. GUIDE can be implemented efficiently for inductive graph unlearning tasks thanks to its low graph-partition cost, in terms of both computation and structural information. The code will be available at https://github.com/Happy2Git/GUIDE.
Pages: 3205-3222
Citations: 5
Token-Level Fuzzing
Pub Date : 2023-04-04 DOI: 10.48550/arXiv.2304.02103
Christopher Salls, Chani Jindal, Jake Corina, Chris A. Kruegel, G. Vigna
Fuzzing has become a commonly used approach to identifying bugs in complex, real-world programs. However, interpreters are notoriously difficult to fuzz effectively, as they expect highly structured inputs, which are rarely produced by most fuzzing mutations. For this class of programs, grammar-based fuzzing has been shown to be effective. Tools based on this approach can find bugs in the code that is executed after parsing the interpreter inputs, by following language-specific rules when generating and mutating test cases. Unfortunately, grammar-based fuzzing is often unable to discover subtle bugs associated with the parsing and handling of the language syntax. Additionally, if the grammar provided to the fuzzer is incomplete, or does not match the implementation completely, the fuzzer will fail to exercise important parts of the available functionality. In this paper, we propose a new fuzzing technique, called Token-Level Fuzzing. Instead of applying mutations either at the byte level or at the grammar level, Token-Level Fuzzing applies mutations at the token level. Evolutionary fuzzers can leverage this technique to both generate inputs that are parsed successfully and generate inputs that do not conform strictly to the grammar. As a result, the proposed approach can find bugs that neither byte-level fuzzing nor grammar-based fuzzing can find. We evaluated Token-Level Fuzzing by modifying AFL and fuzzing four popular JavaScript engines, finding 29 previously unknown bugs, several of which could not be found with state-of-the-art byte-level and grammar-based fuzzers.
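A minimal illustration of token-level mutation (not the paper's AFL-based implementation) might look like the following: split a seed input into lexical tokens, then overwrite a randomly chosen token with one drawn from a token dictionary. The simple regex tokenizer and the dictionary contents are assumptions made for the sketch.

```python
# Illustrative token-level mutation for a JavaScript-like seed input.
# Mutations stay lexically plausible even when the result is not
# grammatically valid, which is exactly the input space that byte-level and
# grammar-based fuzzers tend to miss.
import random
import re

# crude lexer: whitespace runs, identifiers, numbers, or single punctuation
TOKEN_RE = re.compile(r"\s+|[A-Za-z_$][\w$]*|\d+|[^\sA-Za-z_$\d]")


def tokenize(src):
    return TOKEN_RE.findall(src)


def mutate(src, dictionary, rng):
    tokens = tokenize(src)
    # pick a non-whitespace token position to overwrite
    positions = [i for i, t in enumerate(tokens) if not t.isspace()]
    i = rng.choice(positions)
    tokens[i] = rng.choice(dictionary)
    return "".join(tokens)


rng = random.Random(1)
seed = "var x = foo(1);"
dictionary = ["let", "bar", "42", "typeof", "==="]  # harvested from the corpus
for _ in range(3):
    print(mutate(seed, dictionary, rng))
```

In a real fuzzer the dictionary would be harvested from the seed corpus and the mutated inputs fed to a coverage-guided loop; here the three printed mutants simply show single-token substitutions.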
Pages: 2795-2809
Citations: 12
Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks
Pub Date : 2023-02-27 DOI: 10.48550/arXiv.2302.13520
Jialai Wang, Ziyuan Zhang, Meiqi Wang, Han Qiu, Tianwei Zhang, Qi Li, Zongpeng Li, Tao Wei, Chao Zhang
Bit-flip attacks (BFAs) have attracted substantial attention recently; in such attacks, an adversary tampers with a small number of model parameter bits to break the integrity of DNNs. To mitigate such threats, a batch of defense methods have been proposed, focusing on untargeted scenarios. Unfortunately, they either require extra trustworthy applications or make models more vulnerable to targeted BFAs. Countermeasures against targeted BFAs, which are stealthier and more purposeful by nature, are far from well established. In this work, we propose Aegis, a novel defense method to mitigate targeted BFAs. The core observation is that existing targeted attacks focus on flipping critical bits in certain important layers. Thus, we design a dynamic-exit mechanism that attaches extra internal classifiers (ICs) to hidden layers. This mechanism enables input samples to exit early from different layers, which effectively upsets the adversary's attack plans. Moreover, the dynamic-exit mechanism randomly selects ICs for predictions during each inference, significantly increasing the attack cost for adaptive attacks in which all defense mechanisms are transparent to the adversary. We further propose a robustness training strategy that adapts ICs to the attack scenarios by simulating BFAs during the IC training phase, to increase model robustness. Extensive evaluations over four well-known datasets and two popular DNN structures reveal that Aegis can effectively mitigate different state-of-the-art targeted attacks, reducing attack success rates by 5-10x and significantly outperforming existing defense methods.
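The dynamic-exit idea can be caricatured in a few lines of Python. The "layers" and internal classifiers below are stand-in functions rather than real DNN components; the sketch only shows how randomly choosing an exit varies the forward path from one inference to the next, so a bit flip tuned to one fixed path no longer controls the output deterministically.

```python
# Toy sketch of dynamic early exits with randomly selected internal
# classifiers (ICs). Real ICs would be small trained heads on hidden-layer
# activations; here each "layer" just transforms an integer and each IC maps
# the intermediate value to a fake class label.
import random


def make_model(num_layers):
    layers = [lambda x, i=i: x + i for i in range(num_layers)]  # fake hidden layers
    ics = [lambda h, i=i: ("class_%d" % (h % 3), i)             # fake IC per layer
           for i in range(num_layers)]
    return layers, ics


def predict(x, layers, ics, rng):
    exit_at = rng.randrange(1, len(layers) + 1)  # random early-exit depth
    h = x
    for layer in layers[:exit_at]:               # run only the chosen prefix
        h = layer(h)
    label, ic_used = ics[exit_at - 1](h)         # classify at the exit layer
    return label, ic_used


layers, ics = make_model(4)
rng = random.Random(0)
print([predict(7, layers, ics, rng) for _ in range(3)])
```

Because the exit depth changes per inference, an attacker who flipped bits affecting only the deepest layers would see their payload bypassed whenever an earlier exit is drawn.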
Pages: 2329-2346
Citations: 3
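The dynamic-exit mechanism described in the Aegis abstract above can be illustrated with a toy sketch: internal classifiers (ICs) attached to hidden layers, with the exit layer drawn at random on each inference. The layer and IC functions here are invented stand-ins, not the paper's models or training procedure.

```python
import random

def make_toy_model():
    # Each "layer" transforms an integer feature; each IC maps the
    # intermediate feature to a class label (0 or 1). Pure toy functions.
    layers = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
    ics = [lambda h: int(h % 2 == 0)] * len(layers)
    return layers, ics

def dynamic_exit_predict(layers, ics, x, rng):
    # Randomly pick the layer after which to exit. An adversary who
    # flipped bits in one specific layer no longer controls every
    # inference path, which is the intuition behind the defense.
    exit_at = rng.randrange(len(layers))
    h = x
    for i, layer in enumerate(layers):
        h = layer(h)
        if i == exit_at:
            return ics[i](h)
    return ics[-1](h)

rng = random.Random(0)
layers, ics = make_toy_model()
# The same input can exit at different layers on different runs.
preds = [dynamic_exit_predict(layers, ics, 5, rng) for _ in range(10)]
```

Because the exit point is resampled per inference, an adaptive attacker who knows the whole mechanism still cannot predict which IC will produce the final label.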
PROVIDENCE: a Flexible Round-by-Round Risk-Limiting Audit
Pub Date : 2022-10-17 DOI: 10.48550/arXiv.2210.08717
Oliver Broadrick, P. Vora, Filip Zag'orski
A Risk-Limiting Audit (RLA) is a statistical election tabulation audit with a rigorous error guarantee. We present the ballot polling RLA PROVIDENCE, an audit with the efficiency of MINERVA and the flexibility of BRAVO. We prove that PROVIDENCE is risk-limiting in the presence of an adversary who can choose subsequent round sizes given knowledge of previous samples. We describe a measure of audit workload as a function of the number of rounds, precincts touched, and ballots drawn. We quantify the problem of obtaining a misleading audit sample when rounds are too small, demonstrating the importance of the resulting constraint on audit planning. We present simulation results demonstrating the superiority of PROVIDENCE using these measures and describe an approach to planning audit round schedules. We describe the use of PROVIDENCE by the Rhode Island Board of Elections in a tabulation audit of the 2021 election. Our implementation of PROVIDENCE and audit planning tools in the open source R2B2 library should be useful to the states of Georgia and Pennsylvania, which are planning pre-certification ballot polling RLAs for the 2022 general election.
{"title":"PROVIDENCE: a Flexible Round-by-Round Risk-Limiting Audit","authors":"Oliver Broadrick, P. Vora, Filip Zag'orski","doi":"10.48550/arXiv.2210.08717","DOIUrl":"https://doi.org/10.48550/arXiv.2210.08717","url":null,"abstract":"A Risk-Limiting Audit (RLA) is a statistical election tabulation audit with a rigorous error guarantee. We present ballot polling RLA PROVIDENCE, an audit with the efficiency of MINERVA and flexibility of BRAVO. We prove that PROVIDENCE is risk-limiting in the presence of an adversary who can choose subsequent round sizes given knowledge of previous samples. We describe a measure of audit workload as a function of the number of rounds, precincts touched, and ballots drawn.We quantify the problem of obtaining a misleading audit sample when rounds are too small, demonstrating the importance of the resulting constraint on audit planning. We present simulation results demonstrating the superiority of PROVIDENCE using these measures and describing an approach to planning audit round schedules. We describe the use of PROVIDENCE by the Rhode Island Board of Elections in a tabulation audit of the 2021 election. Our implementation of PROVIDENCE and audit planning tools in the open source R2B2 library should be useful to the states of Georgia and Pennsylvania, which are planning pre-certification ballot polling RLAs for the 2022 general election.","PeriodicalId":91597,"journal":{"name":"Proceedings of the ... USENIX Security Symposium. 
UNIX Security Symposium","volume":"30 1","pages":"6753-6770"},"PeriodicalIF":0.0,"publicationDate":"2022-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84650035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
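For intuition about ballot-polling risk measurement, here is a sketch of the classic BRAVO-style sequential likelihood ratio, one of the audits the PROVIDENCE abstract above compares against. PROVIDENCE's round-by-round statistic, which builds on MINERVA, is different and not reproduced here; the vote counts below are invented.

```python
def bravo_risk(sample, reported_winner_share, risk_limit=0.05):
    # Compare the likelihood of the observed sample under the reported
    # outcome (winner share p > 0.5) against a tied election (share 0.5),
    # stopping once the ratio clears 1/risk_limit.
    # sample: iterable of True (ballot for reported winner) / False.
    p = reported_winner_share
    ratio = 1.0
    for for_winner in sample:
        ratio *= (p / 0.5) if for_winner else ((1 - p) / 0.5)
        if ratio >= 1.0 / risk_limit:
            return True, ratio  # audit can stop: risk limit met
    return False, ratio

# 60 of 80 sampled ballots for the winner, reported share 60%:
done, ratio = bravo_risk([True] * 60 + [False] * 20, 0.6)
```

With these numbers the statistic clears the 5% risk limit after 17 consecutive winner ballots, so the audit could stop well before exhausting the sample; a sample drifting toward 50/50 would instead force escalation to larger rounds.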
XDRI Attacks - and - How to Enhance Resilience of Residential Routers
Pub Date : 2022-08-25 DOI: 10.48550/arXiv.2208.12003
Philipp Jeitner, Haya Schulmann, Lucas Teichmann, M. Waidner
We explore the security of residential routers and find a range of critical vulnerabilities. Our evaluations show that 10 out of 36 popular routers are vulnerable to injections of fake records via misinterpretation of special characters. We also find that in 15 of the 36 routers the mechanisms that are meant to prevent cache poisoning attacks can be circumvented. In our Internet-wide study with an advertisement network, we identified and analyzed 976 residential routers used by web clients, of which more than 95% were found vulnerable to our attacks. Overall, vulnerable routers are prevalent and are distributed among 177 countries and 4830 networks. To understand the core factors causing the vulnerabilities, we perform black- and white-box analyses of the routers. We find that many problems can be attributed to incorrect assumptions about the protocols' behaviour and the Internet, misunderstanding of the standard recommendations, bugs, and simplified DNS software implementations. We provide recommendations to mitigate our attacks. We also set up a tool that enables everyone to evaluate the security of their routers at https://xdi-attack.net/.
{"title":"XDRI Attacks - and - How to Enhance Resilience of Residential Routers","authors":"Philipp Jeitner, Haya Schulmann, Lucas Teichmann, M. Waidner","doi":"10.48550/arXiv.2208.12003","DOIUrl":"https://doi.org/10.48550/arXiv.2208.12003","url":null,"abstract":"We explore the security of residential routers and find a range of critical vulnerabilities. Our evaluations show that 10 out of 36 popular routers are vulnerable to injections of fake records via misinterpretation of special characters. We also find that in 15 of the 36 routers the mechanisms, that are meant to prevent cache poisoning attacks, can be circumvented. In our Internet-wide study with an advertisement network, we identified and analyzed 976 residential routers used by web clients, out of which more than 95% were found vulnerable to our attacks. Overall, vulnerable routers are prevalent and are distributed among 177 countries and 4830 networks. To understand the core factors causing the vulnerabilities we perform black- and white-box analyses of the routers. We find that many problems can be attributed to incorrect assumptions on the protocols’ behaviour and the Internet, misunderstanding of the standard recommendations, bugs, and simplified DNS software implementations. We provide recommendations to mitigate our attacks. We also set up a tool to enable everyone to evaluate the security of their routers at https://xdi-attack.net/ .","PeriodicalId":91597,"journal":{"name":"Proceedings of the ... USENIX Security Symposium. 
UNIX Security Symposium","volume":"3 1","pages":"4473-4490"},"PeriodicalIF":0.0,"publicationDate":"2022-08-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78454980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
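The "misinterpretation of special characters" injection vector in the XDRI abstract above can be illustrated with a toy parser. DNS permits a literal '.' byte inside a label, written in zone-file presentation format as \046 (or \.); software that unescapes the name before splitting it on '.' conflates one attacker-controlled label with several. The names and the buggy parser below are illustrative, not taken from any real router.

```python
def naive_name_to_labels(presentation_name):
    # Buggy handling: unescape first, THEN split on '.'.
    # Correct code would split first and keep the escaped byte
    # inside its single label.
    unescaped = presentation_name.replace("\\046", ".").replace("\\.", ".")
    return tuple(label for label in unescaped.split(".") if label)

# One attacker-owned label ("www\046victim") under example.test ...
attacker_name = "www\\046victim.example.test"
# ... becomes indistinguishable from the three-label victim name,
# letting a record for the attacker's name be cached under the victim's:
collided = naive_name_to_labels(attacker_name) == naive_name_to_labels(
    "www.victim.example.test")
```

The collision is the whole attack surface: once two distinct wire-format names map to the same cache key, an attacker-controlled record can shadow a legitimate one.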
MPInspector: A Systematic and Automatic Approach for Evaluating the Security of IoT Messaging Protocols
Pub Date : 2022-08-18 DOI: 10.48550/arXiv.2208.08751
Qinying Wang, S. Ji, Yuan Tian, Xuhong Zhang, Binbin Zhao, Yu Kan, Zhaowei Lin, Changting Lin, Shuiguang Deng, A. Liu, R. Beyah
Facilitated by messaging protocols (MP), many home devices are connected to the Internet, bringing convenience and accessibility to customers. However, most deployed MPs on IoT platforms are fragmented and are not implemented carefully to support secure communication. To the best of our knowledge, there is no systematic solution yet to perform automatic security checks on MP implementations. To bridge the gap, we present MPInspector, the first automatic and systematic solution for vetting the security of MP implementations. MPInspector combines model learning with formal analysis and operates in three stages: (a) using parameter semantics extraction and interaction logic extraction to automatically infer the state machine of an MP implementation, (b) generating security properties based on meta properties and the state machine, and (c) applying automatic property-based formal verification to identify property violations. We evaluate MPInspector on three popular MPs, including MQTT, CoAP and AMQP, implemented on nine leading IoT platforms. It identifies 252 property violations; leveraging these, we further identify eleven types of attacks under two realistic attack scenarios. In addition, we demonstrate that MPInspector is lightweight (the average overhead of end-to-end analysis is ~4.5 hours) and effective, with a precision of 100% in identifying property violations.
{"title":"MPInspector: A Systematic and Automatic Approach for Evaluating the Security of IoT Messaging Protocols","authors":"Qinying Wang, S. Ji, Yuan Tian, Xuhong Zhang, Binbin Zhao, Yu Kan, Zhaowei Lin, Changting Lin, Shuiguang Deng, A. Liu, R. Beyah","doi":"10.48550/arXiv.2208.08751","DOIUrl":"https://doi.org/10.48550/arXiv.2208.08751","url":null,"abstract":"Facilitated by messaging protocols (MP), many home devices are connected to the Internet, bringing convenience and accessibility to customers. However, most deployed MPs on IoT platforms are fragmented and are not implemented carefully to support secure communication. To the best of our knowledge, there is no systematic solution to perform automatic security checks on MP implementations yet. To bridge the gap, we present MPInspector, the first automatic and systematic solution for vetting the security of MP implementations. MPInspector combines model learning with formal analysis and operates in three stages: (a) using parameter semantics extraction and interaction logic extraction to automatically infer the state machine of an MP implementation, (b) generating security properties based on meta properties and the state machine, and (c) applying automatic property based formal verification to identify property violations. We evaluate MPInspector on three popular MPs, including MQTT, CoAP and AMQP, implemented on nine leading IoT platforms. It identifies 252 property violations, leveraging which we further identify eleven types of attacks under two realistic attack scenarios. In addition, we demonstrate that MPInspector is lightweight (the average overhead of end-to-end analysis is ~4.5 hours) and effective with a precision of 100% in identifying property violations.","PeriodicalId":91597,"journal":{"name":"Proceedings of the ... USENIX Security Symposium. 
UNIX Security Symposium","volume":"20 1","pages":"4205-4222"},"PeriodicalIF":0.0,"publicationDate":"2022-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86092214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
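Stage (c) in the MPInspector abstract above, checking security properties against an inferred state machine, can be sketched as a reachability query: does any trace accept a forbidden message before its guard message has occurred? The tiny MQTT-like machine and the property below are illustrative, not MPInspector's actual model or property language.

```python
from collections import deque

# Inferred state machine: state -> {input message -> next state}.
# The INIT state wrongly accepts PUBLISH before any CONNECT (a bug
# planted for the example).
machine = {
    "INIT":      {"CONNECT": "CONNECTED", "PUBLISH": "ACCEPTED"},
    "CONNECTED": {"PUBLISH": "ACCEPTED", "DISCONNECT": "INIT"},
    "ACCEPTED":  {},
}

def violates(machine, start, bad_msg, guard_msg):
    # Property: no trace may take bad_msg before guard_msg has occurred.
    # BFS over (state, guard_seen) pairs finds a violating trace if any.
    queue = deque([(start, False)])
    seen = set()
    while queue:
        state, guarded = queue.popleft()
        if (state, guarded) in seen:
            continue
        seen.add((state, guarded))
        for msg, nxt in machine[state].items():
            if msg == bad_msg and not guarded:
                return True  # violation: bad_msg reachable unguarded
            queue.append((nxt, guarded or msg == guard_msg))
    return False

violation = violates(machine, "INIT", "PUBLISH", "CONNECT")
```

Because the property is checked over the learned model rather than over concrete packet traces, a single query covers every message ordering the implementation admits.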
Proceedings of the ... USENIX Security Symposium. UNIX Security Symposium