
2023 IEEE Security and Privacy Workshops (SPW): Latest Publications

Cryo-Mechanical RAM Content Extraction Against Modern Embedded Systems
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00030
Yuanzhe Wu, Grant Skipper, Ang Cui
Cryogenic mechanical memory extraction provides a means to obtain a device's volatile memory content at run-time. Numerous prior works have demonstrated successful exploitation of the Memory Remanence Effect on modern computers and mobile devices. While this approach is arguably one of the most direct paths to reading a target device's physical RAM content, several significant limitations exist. For example, prior works were done either on removable memory with standardized connectors or with the use of a custom kernel/bootloader. We present a generalized and automated system that performs reliable RAM content extraction against modern embedded devices. Our cryo-mechanical apparatus is built from low-cost, widely available hardware and supports target devices using single or multiple DDR1|2|3 memory modules. We discuss several novel techniques and hardware modifications that allow our apparatus to exceed the spatial and temporal precision required to reliably perform memory extraction against modern embedded systems that have memory modules soldered directly onto the PCB and use custom memory controllers that spread the bits of each memory word across multiple physical RAM chips.
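To make the bit-spreading concrete: a recovery pipeline has to de-interleave each word from the per-chip dumps. The Python sketch below is purely illustrative; the round-robin striping order, 16-bit word width, and four-chip layout are assumptions for demonstration, and real controllers use vendor-specific mappings.

```python
# Illustrative sketch: recombine memory words whose bits a custom memory
# controller striped across several physical RAM chips. The striping order
# (round-robin, LSB first) and the 16-bit word width are assumptions.

def bit(data: bytes, index: int) -> int:
    """Return the bit at a global bit index within one chip dump."""
    return (data[index // 8] >> (index % 8)) & 1

def recombine_word(chip_dumps: list[bytes], word_index: int, word_bits: int = 16) -> int:
    """Reassemble one word by gathering its bits round-robin across chips."""
    n_chips = len(chip_dumps)
    word = 0
    for i in range(word_bits):
        chip = i % n_chips                                  # which chip holds bit i
        offset = word_index * (word_bits // n_chips) + i // n_chips
        word |= bit(chip_dumps[chip], offset) << i
    return word

# Example: four 2-byte chip dumps yield one 16-bit word per word_index.
dumps = [bytes([0b1010, 0]), bytes([0b1100, 0]), bytes([0b1111, 0]), bytes([0b0001, 0])]
print(hex(recombine_word(dumps, 0)))
```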
Citations: 1
Benchmarking the Effect of Poisoning Defenses on the Security and Bias of Deep Learning Models
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00010
N. Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, S. Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
Machine learning models are susceptible to a class of attacks known as adversarial poisoning, where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning, either by identifying the poisoned samples to facilitate removal or by developing poison-agnostic training algorithms. Although effective, these approaches can have unintended consequences for the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness for higher adversarial poisoning robustness. Given these results, we recommend that our proposed metric become part of standard evaluations of machine learning defenses.
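To illustrate the kind of two-axis evaluation described above, the following Python sketch scores a defended model on both robustness (backdoor attack success rate) and a simple fairness measure (worst-case accuracy gap across sub-populations). The gap-based metric and the toy data are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

# Illustrative sketch of evaluating a poisoning defense on two axes:
# robustness (backdoor attack success rate) and fairness (worst-case
# accuracy gap across data sub-populations). The gap-based fairness
# definition below is an assumption for demonstration purposes.

def attack_success_rate(preds: np.ndarray, target_label: int, triggered: np.ndarray) -> float:
    """Fraction of trigger-carrying inputs classified as the attacker's target."""
    return float(np.mean(preds[triggered] == target_label))

def subpopulation_accuracy_gap(preds: np.ndarray, labels: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in accuracy between any two sub-populations."""
    accs = [np.mean(preds[groups == g] == labels[groups == g]) for g in np.unique(groups)]
    return float(max(accs) - min(accs))

# Toy data: 8 samples, 2 sub-populations, 2 samples carry the backdoor trigger.
preds  = np.array([0, 1, 1, 0, 1, 1, 0, 0])
labels = np.array([0, 1, 0, 0, 1, 1, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
trig   = np.array([False, False, True, True, False, False, False, False])

print("ASR :", attack_success_rate(preds, target_label=1, triggered=trig))
print("Gap :", subpopulation_accuracy_gap(preds, labels, groups))
```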
Citations: 0
Is It Overkill? Analyzing Feature-Space Concept Drift in Malware Detectors
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00007
Zhi Chen, Zhenning Zhang, Zeliang Kan, Limin Yang, Jacopo Cortellazzi, Feargus Pendlebury, Fabio Pierazzi, L. Cavallaro, Gang Wang
Concept drift is a major challenge faced by machine learning-based malware detectors when deployed in practice. While existing works have investigated methods to detect concept drift, the main causes behind the drift are not yet well understood. In this paper, we design experiments to empirically analyze the impact of feature-space drift (new features introduced by new samples) and compare it with data-space drift (data distribution shift over existing features). Surprisingly, we find that data-space drift is the dominant contributor to model degradation over time, while feature-space drift has little to no impact. This is consistently observed across both Android and PE malware detectors, with different feature types and feature engineering methods, across different settings. We further validate this observation with recent online-learning-based malware detectors that incrementally update the feature space. Our result indicates the possibility of handling concept drift without frequent feature updating, and we further discuss open questions for future research.
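The two drift notions compared above can be made concrete with a small sketch. In the illustrative Python below (the set-of-features representation and the Jensen-Shannon measure are assumptions, not the paper's methodology), feature-space drift shows up as features unseen during training, while data-space drift is a frequency shift over the features both windows share.

```python
from collections import Counter
from scipy.spatial.distance import jensenshannon

# Illustrative sketch: separate feature-space drift (new feature names) from
# data-space drift (shifted frequencies over shared features) between a
# training window and a test window. Each sample is a set of features
# (e.g., Android permissions); the toy data is an assumption.

train = [{"SEND_SMS", "INTERNET"}, {"INTERNET"}, {"READ_CONTACTS", "INTERNET"}]
test  = [{"INTERNET", "BIND_VPN_SERVICE"}, {"SEND_SMS"}, {"SEND_SMS", "INTERNET"}]

train_vocab = set().union(*train)
test_vocab  = set().union(*test)

# Feature-space drift: fraction of test-time features never seen in training.
new_feats = test_vocab - train_vocab
print("feature-space drift:", len(new_feats) / len(test_vocab), new_feats)

# Data-space drift: distribution shift over the features both windows share.
shared = sorted(train_vocab & test_vocab)

def freq(windows, vocab):
    counts = Counter(f for sample in windows for f in sample if f in vocab)
    total = sum(counts.values())
    return [counts[f] / total for f in vocab]

print("data-space drift (JS distance):",
      jensenshannon(freq(train, shared), freq(test, shared)))
```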
Citations: 2
PolyDoc: Surveying PDF Files from the PolySwarm network
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00017
Prashant Anantharaman, R. Lathrop, Rebecca Shapiro, M. Locasto
Complex data formats implicitly demand complex logic to parse and apprehend them. The Portable Document Format (PDF) is among the most demanding formats because it is used as both a data exchange and a presentation format, and it has a particularly stringent tradition of supporting interoperability and consistent presentation. These requirements create complexity that presents an opportunity for adversaries to encode a variety of exploits and attacks. To investigate whether there is an association between structural malformations and malice (using PDF files as the example challenge format), we built PolyDoc, a tool that conducts format-aware tracing of files pulled from the PolySwarm network. The PolySwarm network crowdsources threat intelligence by running files through several industry-scale threat-detection engines, and it provides a PolyScore, which indicates whether a file is safe or malicious, as judged by those engines. We ran PolyDoc in a live hunt mode to gather PDF files submitted to PolySwarm and then traced the execution of these PDF files through popular PDF tools such as Mutool, Poppler, and Caradoc. We collected and analyzed 58,906 files from PolySwarm. Further, we used the PDF Error Ontology to assign error categories based on tracer output and compared them to the PolyScore. Our work demonstrates three core insights. First, PDF files classified as malicious contain syntactic malformations. Second, "uncategorized" error ontology classes were common across our different PDF tools, demonstrating that the PDF Error Ontology may be underspecified for the files that real-world threat engines receive. Finally, attackers leverage specific syntactic malformations in attacks: malformations that current PDF tools can detect.
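A minimal version of this kind of format-aware tracing can be sketched as follows. The Python snippet below is an illustration, not PolyDoc's actual implementation: it runs two common PDF processors over a file and buckets their stderr diagnostics into coarse categories. The keyword-to-category mapping is an assumption; PolyDoc uses the PDF Error Ontology.

```python
import subprocess

# Illustrative sketch: run PDF tools over a file and bucket their stderr
# diagnostics into coarse error categories. The category keywords below
# are assumptions for demonstration only.

CATEGORIES = {
    "xref": "cross-reference error",
    "trailer": "trailer error",
    "object": "object structure error",
    "syntax": "syntax error",
}

def trace_pdf(path: str) -> dict[str, set[str]]:
    tools = [
        ["mutool", "draw", "-o", "/dev/null", path],   # MuPDF renderer
        ["pdftotext", path, "/dev/null"],              # Poppler text extractor
    ]
    findings: dict[str, set[str]] = {}
    for cmd in tools:
        proc = subprocess.run(cmd, capture_output=True, text=True)
        cats = {cat for kw, cat in CATEGORIES.items()
                if kw in proc.stderr.lower()}
        findings[cmd[0]] = cats or ({"uncategorized"} if proc.stderr else set())
    return findings

if __name__ == "__main__":
    print(trace_pdf("sample.pdf"))  # "sample.pdf" is a placeholder path
```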
Citations: 0
On the Brittleness of Robust Features: An Exploratory Analysis of Model Robustness and Illusionary Robust Features
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00009
Alireza Aghabagherloo, Rafa Gálvez, D. Preuveneers, B. Preneel
Neural networks have been shown to be vulnerable to visual data perturbations imperceptible to the human eye. Today, the leading hypothesis for the existence of these adversarial examples is the presence of non-robust features, which are highly predictive but brittle. It has also been shown that there exist two types of non-robust features, depending on whether or not they are entangled with robust features; perturbing non-robust features entangled with robust features can form adversarial examples. This paper extends earlier work by showing that models trained exclusively on robust features are still vulnerable to one type of adversarial example. Standard-trained networks can classify more accurately than robustly trained networks in this situation. Our experiments show that this phenomenon is due to the high correlation between most of the robust features and both correct and incorrect labels. In this work, we define features highly correlated with both correct and incorrect labels as illusionary robust features. We discuss how perturbing an image to attack robust models affects the feature space. Based on our observations of the feature space, we explain why standard models are more successful than robustly trained models in correctly classifying these perturbed images. Our observations also show that, similar to changing non-robust features, changing some of the robust features is still imperceptible to the human eye.
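The definition above suggests a simple way to screen for such features. The Python sketch below is an illustration under assumptions (synthetic data, one-vs-rest |correlation| with class indicators, and a 0.5 threshold), not the paper's experimental pipeline: a feature counts as "illusionary robust" here when it correlates strongly with more than one class label.

```python
import numpy as np

# Illustrative sketch: flag "illusionary robust" features, read here as
# features whose values correlate strongly with more than one class label.
# The one-vs-rest correlation screen and the 0.5 threshold are assumptions.

rng = np.random.default_rng(0)
n, d, k = 300, 5, 3                          # samples, features, classes
y = rng.integers(0, k, size=n)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * (y == 0)                    # predictive of class 0 only
X[:, 1] += 1.5 * (y == 0) - 1.5 * (y == 1)   # tied to two labels at once

def class_correlations(X: np.ndarray, y: np.ndarray, k: int) -> np.ndarray:
    """|corr| of each feature with each one-vs-rest class indicator."""
    out = np.zeros((X.shape[1], k))
    for c in range(k):
        ind = (y == c).astype(float)
        for j in range(X.shape[1]):
            out[j, c] = abs(np.corrcoef(X[:, j], ind)[0, 1])
    return out

corr = class_correlations(X, y, k)
illusionary = np.where((corr > 0.5).sum(axis=1) >= 2)[0]
print("per-class |corr|:\n", corr.round(2))
print("flagged as illusionary robust:", illusionary)
```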
Citations: 0
ASanity: On Bug Shadowing by Early ASan Exits
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00037
V. Ulitzsch, Deniz Scholz, D. Maier
Bugs in memory-unsafe languages are a major source of critical vulnerabilities. Large-scale fuzzing campaigns, such as Google's OSS-Fuzz, can help find and fix these bugs. To find bugs faster during fuzzing, as well as to cluster and triage them more easily in an automated setup, the targets are compiled with a set of sanitizers enabled that check certain conditions at runtime. The most common sanitizer, ASan, reports common bug patterns found during a fuzzing campaign, such as out-of-bounds reads and writes or use-after-free bugs, and aborts the program early. Each report also contains the type of bug the sanitizer found. During triage, out-of-bounds reads are often considered less critical than other bugs, namely out-of-bounds writes and use-after-free bugs. However, in this paper we show that these more severe vulnerabilities can remain undetected in ASan, shadowed by an earlier faulty read access. To prove this claim empirically, we conduct a large-scale study on 814 out-of-bounds read bugs reported by OSS-Fuzz. By rerunning the same test cases but disabling ASan's early exits, we show that almost five percent of test cases lead to more critical violations later in the execution. Further, we pick the real-world target wasm3 and show how the reported out-of-bounds read covered up an exploitable out-of-bounds write that was silently patched.
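ASan's early exits can in fact be disabled: building with -fsanitize-recover=address and running with ASAN_OPTIONS=halt_on_error=0 lets the sanitizer report subsequent errors instead of aborting on the first one. The Python harness below sketches the rerun-and-compare idea; the severity ranking, report parsing, and file names (./target_asan_recover, crash-814) are illustrative assumptions, not the paper's tooling.

```python
import os
import subprocess

# Illustrative sketch: rerun a crashing test case against a target built with
# -fsanitize-recover=address, let ASan continue past the first error
# (halt_on_error=0), and check whether the first report shadowed a more
# severe bug. The severity ordering below is an assumption.

SEVERITY = {"heap-buffer-overflow READ": 1,
            "heap-buffer-overflow WRITE": 3,
            "heap-use-after-free READ": 3,
            "heap-use-after-free WRITE": 3}

def asan_bug_types(target: str, testcase: str) -> list[str]:
    env = dict(os.environ, ASAN_OPTIONS="halt_on_error=0:detect_leaks=0")
    proc = subprocess.run([target, testcase], env=env,
                          capture_output=True, text=True)
    bugs = []
    # Each ASan report starts with "ERROR: AddressSanitizer: <kind> ..." and
    # is followed by a "READ of size N" or "WRITE of size N" line.
    for block in proc.stderr.split("ERROR: AddressSanitizer: ")[1:]:
        kind = block.split()[0]
        access = "WRITE" if "WRITE of" in block else "READ"
        bugs.append(f"{kind} {access}")
    return bugs

bugs = asan_bug_types("./target_asan_recover", "crash-814")  # placeholder paths
if len(bugs) > 1:
    shadowed = [b for b in bugs[1:] if SEVERITY.get(b, 2) > SEVERITY.get(bugs[0], 2)]
    if shadowed:
        print("first report shadowed more severe bugs:", shadowed)
```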
Citations: 0
Hakuin: Optimizing Blind SQL Injection with Probabilistic Language Models
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00039
Jakub Pruzinec, Quynh Anh Nguyen
SQL Injection (SQLI) is a pervasive web attack where a malicious input is used to dynamically build SQL queries in a way that tricks the database (DB) engine into performing unintended harmful operations. Among many potential exploitations, an attacker may opt to exfiltrate the application data. The exfiltration process is straightforward when the web application responds to injected queries with their results. If the content is not exposed, the adversary can still deduce it using Blind SQLI (BSQLI), an inference technique based on response differences or time delays. Unfortunately, a common drawback of BSQLI is its low inference rate (one bit per request), which severely limits the volume of data that can be extracted this way. To address this limitation, state-of-the-art BSQLI tools optimize the inference of textual data with binary search. However, this approach has two major limitations: it assumes a uniform distribution of characters and does not take into account the history of previously inferred characters. Consequently, the technique is inefficient for the natural-language data ubiquitous in DBs. This paper presents Hakuin, a new framework for optimizing BSQLI with probabilistic language models. Hakuin employs domain-specific pre-trained and adaptive models to predict the next characters based on the inference history, and it prioritizes characters with a higher probability of being the right ones. It also tracks statistical information to opportunistically guess whole strings instead of inferring the characters separately. We benchmark Hakuin against three state-of-the-art BSQLI tools using 20 industry-standard DB schemas and a generic DB. The results show that Hakuin is about 6 times more efficient at inferring schemas, up to 3.2 times more efficient with generic data, and up to 26 times more efficient on columns with limited values, compared to the second-best performing tool. To the best of our knowledge, Hakuin is the first solution that combines domain-specific pre-trained and adaptive language models to optimize BSQLI. We release its full source code, datasets, and language models to facilitate further research.
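The baseline being optimized here is easy to sketch. In the illustrative Python below, `oracle(pos, k)` stands in for one injected boolean request such as "is ASCII(SUBSTRING(secret, pos, 1)) < k?", simulated locally; the ranked character list is a toy stand-in for a language model. Binary search always costs about seven requests per character over the ASCII range, while a model-guided guesser that tries likely characters first can do much better on natural text.

```python
# Illustrative sketch of blind SQLI character inference. The oracle simulates
# one boolean BSQLI request; the "language model" is a toy frequency ranking.

SECRET = "admin"

def oracle(pos: int, k: int) -> bool:
    """Simulates one boolean BSQLI request: is ASCII(char at pos) < k?"""
    return ord(SECRET[pos]) < k

def binary_search_char(pos: int) -> tuple[str, int]:
    """Classic baseline: ~7 requests per character over the ASCII range."""
    lo, hi, requests = 0, 128, 0
    while hi - lo > 1:
        mid = (lo + hi) // 2
        requests += 1
        lo, hi = (lo, mid) if oracle(pos, mid) else (mid, hi)
    return chr(lo), requests

def guided_char(pos: int, ranked: str) -> tuple[str, int]:
    """Model-guided: probe candidates in probability order."""
    for requests, c in enumerate(ranked, start=1):
        # One equality check costs two range probes; we count one request
        # per candidate for simplicity.
        if not oracle(pos, ord(c)) and oracle(pos, ord(c) + 1):
            return c, requests
    return "?", len(ranked)

RANKED = "aeinormdst"  # toy model: characters ordered by assumed frequency
for pos in range(len(SECRET)):
    print(binary_search_char(pos), guided_char(pos, RANKED))
```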
Citations: 0
Reflections on Trusting Docker: Invisible Malware in Continuous Integration Systems
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00025
Florent Moriconi, Axel Neergaard, Lucas Georget, Samuel Aubertin, Aurélien Francillon
Continuous integration (CI) is a widely adopted methodology for supporting software development. It provides automated generation of artifacts (e.g., binaries, container images) which are then deployed in production. However, to what extent should you trust the generated artifacts, even if the source code is clean of malicious code? Revisiting the famous compiler backdoor from Ken Thompson, we show that a container-based CI system can be compromised without leaving any trace in the source code. Therefore, detecting such malware is challenging or even impossible with common practices such as peer review or static code analysis. We detail multiple ways to carry out the initial infection. Then, we show how to persist across CI system updates, allowing long-term compromise. We detail possible malicious attack payloads, such as sensitive data extraction or backdooring production software. We show that infected CI systems can be remotely controlled using covert channels to update the attack payload or adapt the malware to mitigation strategies. Finally, we propose a proof-of-concept implementation tested on GitLab CI and applicable to major CI providers.
Citations: 1
HoneyKube: Designing and Deploying a Microservices-based Web Honeypot
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00005
Chakshu Gupta, T. V. Ede, Andrea Continella
Over the past few years, we have witnessed a radical change in the architectures and infrastructures of web applications. Traditional monolithic systems are nowadays getting replaced by microservices-based architectures, which have become the natural choice for web application development due to portability, scalability, and ease of deployment. At the same time, due to its popularity, this architecture is now the target of specific cyberattacks. In the past, honeypots have been demonstrated to be valuable tools for collecting real-world attack data and understanding the methods that attackers adopt. However, to the best of our knowledge, there are no existing honeypots based on microservices architectures, which introduce new and different characteristics in the infrastructure. In this paper, we propose HoneyKube, a novel honeypot design that employs the microservices architecture for a web application. To address the challenges introduced by the highly dynamic nature of this architecture, we design an effective and scalable monitoring system that builds on top of the well-known Kubernetes orchestrator. We deploy our honeypot and collect approximately 850 GB of network and system data through our experiments. We also evaluate the fingerprintability of HoneyKube using a state-of-the-art reconnaissance tool. We will release our data and source code to facilitate more research in this field.
Citations: 0
On the Pitfalls of Security Evaluation of Robust Federated Learning
Pub Date : 2023-05-01 DOI: 10.1109/SPW59333.2023.00011
Momin Ahmad Khan, Virat Shejwalkar, A. Houmansadr, F. Anwar
Prior literature has demonstrated that federated learning (FL) is vulnerable to poisoning attacks that aim to jeopardize FL performance, and it has consequently introduced numerous defenses and demonstrated their robustness in various FL settings. In this work, we closely investigate a largely overlooked aspect of the robust FL literature, i.e., the experimental setup used to evaluate the robustness of FL poisoning defenses. We thoroughly review 50 defense works and highlight several questionable trends in the experimental setup of FL poisoning defense papers; we discuss the potential repercussions of such experimental setups on the key conclusions these works draw about the robustness of the proposed defenses. As a representative case study, we also evaluate a recent poisoning recovery paper from IEEE S&P'23, called FedRecover. Our case study demonstrates the importance of experimental setup decisions (e.g., selecting representative and challenging datasets) for the validity of robustness claims; for instance, while FedRecover performs well for MNIST and FashionMNIST (used in the original paper), in our experiments it performed poorly for FEMNIST and CIFAR10.
Citations: 2