Latest Publications: 2019 IEEE Symposium on Security and Privacy (SP)

SoK: Sanitizing for Security
Pub Date: 2018-06-12 | DOI: 10.1109/SP.2019.00010
Dokyung Song, Julian Lettner, Prabhu Rajasekaran, Yeoul Na, Stijn Volckaert, Per Larsen, M. Franz
The C and C++ programming languages are notoriously insecure yet remain indispensable. Developers therefore resort to a multi-pronged approach to find security issues before adversaries. These include manual, static, and dynamic program analysis. Dynamic bug finding tools—henceforth "sanitizers"—can find bugs that elude other types of analysis because they observe the actual execution of a program, and can therefore directly observe incorrect program behavior as it happens. A vast number of sanitizers have been prototyped by academics and refined by practitioners. We provide a systematic overview of sanitizers with an emphasis on their role in finding security issues. Specifically, we taxonomize the available tools and the security vulnerabilities they cover, describe their performance and compatibility properties, and highlight various trade-offs.
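The core idea the survey covers is that a sanitizer instruments a program so that every memory access is checked at runtime against metadata describing what is currently addressable. The toy below sketches this with a shadow map and red zones, loosely in the spirit of AddressSanitizer; the heap model and all names are invented for illustration, not taken from any real tool.

```python
# Toy illustration of how a memory-error sanitizer works: a shadow map
# records which addresses are addressable, and every access is checked
# against it. Red zones of poisoned bytes after each allocation turn an
# off-by-one overflow into an immediately detectable access.

REDZONE = 4  # poisoned bytes placed after each allocation

class ShadowHeap:
    def __init__(self, size=256):
        self.memory = bytearray(size)
        self.shadow = [False] * size   # True = addressable
        self.cursor = 0

    def malloc(self, n):
        base = self.cursor
        for i in range(base, base + n):
            self.shadow[i] = True      # mark the allocation addressable
        self.cursor += n + REDZONE     # leave a poisoned red zone after it
        return base

    def store(self, addr, value):
        if not self.shadow[addr]:      # sanitizer check on every access
            raise RuntimeError(f"heap-buffer-overflow at offset {addr}")
        self.memory[addr] = value

heap = ShadowHeap()
buf = heap.malloc(8)
heap.store(buf + 7, 0xFF)       # last valid byte: allowed
try:
    heap.store(buf + 8, 0xFF)   # one past the end: caught at runtime
    caught = False
except RuntimeError:
    caught = True
print(caught)
```

A static analyzer might miss this overflow if the index is computed dynamically; the sanitizer catches it because it observes the access as it happens, which is exactly the property the abstract attributes to dynamic tools.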
Citations: 121
Exploiting Unintended Feature Leakage in Collaborative Learning
Pub Date: 2018-05-10 | DOI: 10.1109/SP.2019.00029
Luca Melis, Congzheng Song, Emiliano De Cristofaro, Vitaly Shmatikov
Collaborative machine learning and related techniques such as federated learning allow multiple participants, each with his own training dataset, to build a joint model by training locally and periodically exchanging model updates. We demonstrate that these updates leak unintended information about participants' training data and develop passive and active inference attacks to exploit this leakage. First, we show that an adversarial participant can infer the presence of exact data points -- for example, specific locations -- in others' training data (i.e., membership inference). Then, we show how this adversary can infer properties that hold only for a subset of the training data and are independent of the properties that the joint model aims to capture. For example, he can infer when a specific person first appears in the photos used to train a binary gender classifier. We evaluate our attacks on a variety of tasks, datasets, and learning configurations, analyze their limitations, and discuss possible defenses.
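The membership-inference idea can be made concrete with a toy model: an adversary who sees a participant's gradient update, and who can guess a candidate record, simulates the update both with and without that record and checks which hypothesis matches better. The one-dimensional logistic model and data below are invented for the sketch; the paper's attacks operate on real deep-learning updates.

```python
import math

# Toy membership inference against a shared model update. The "victim"
# computes a mean logistic-loss gradient over its private batch; the
# adversary tests whether a candidate record was in that batch by
# comparing the observed update against both hypotheses.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gradient(w, batch):
    # mean gradient of logistic loss for 1-D data: mean((sigma(w*x) - y) * x)
    return sum((sigmoid(w * x) - y) * x for x, y in batch) / len(batch)

w = 0.5
public = [(1.0, 1), (-2.0, 0), (0.5, 1)]
target = (3.0, 0)                 # record whose membership is in question

observed = gradient(w, public + [target])   # victim's real update (target IS in)

# Adversary simulates both hypotheses and picks the closer one.
g_in = gradient(w, public + [target])
g_out = gradient(w, public)
member = abs(observed - g_in) < abs(observed - g_out)
print(member)
```

The same comparison template extends to property inference: instead of a single record, the adversary simulates updates from batches with and without the property of interest.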
Citations: 1054
Certified Robustness to Adversarial Examples with Differential Privacy
Pub Date: 2018-02-09 | DOI: 10.1109/SP.2019.00044
Mathias Lécuyer, Vaggelis Atlidakis, Roxana Geambasu, Daniel J. Hsu, S. Jana
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth. Most past defenses are best effort and have been shown to be vulnerable to sophisticated attacks. Recently a set of certified defenses have been introduced, which provide guarantees of robustness to norm-bounded attacks. However these defenses either do not scale to large datasets or are limited in the types of models they can support. This paper presents the first certified defense that both scales to large networks and datasets (such as Google’s Inception network for ImageNet) and applies broadly to arbitrary model types. Our defense, called PixelDP, is based on a novel connection between robustness against adversarial examples and differential privacy, a cryptographically-inspired privacy formalism, that provides a rigorous, generic, and flexible foundation for defense.
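The mechanism behind PixelDP can be sketched in a few lines: randomize the input with noise so that the prediction becomes a stable, DP-like function of the input, then predict by majority vote over noisy samples. The threshold "classifier" and parameters below are invented for illustration and carry no certified bound themselves; the paper derives the actual robustness guarantee from differential-privacy composition.

```python
import random

# Sketch of noise-based prediction smoothing in the spirit of PixelDP:
# a brittle base classifier is wrapped so that each prediction is a
# majority vote over Gaussian-noised copies of the input. High vote
# margins indicate predictions that small perturbations cannot flip.

random.seed(0)  # fixed seed so the Monte Carlo estimate is reproducible

def base_classifier(x):
    return 1 if x > 0.0 else 0        # brittle: flips at the boundary

def smoothed_classifier(x, sigma=1.0, n=2000):
    # Estimate E[f(x + noise)] by Monte Carlo and take the majority label.
    votes = sum(base_classifier(x + random.gauss(0.0, sigma)) for _ in range(n))
    p1 = votes / n
    label = 1 if p1 >= 0.5 else 0
    return label, p1

label, p1 = smoothed_classifier(2.0)
print(label, round(p1, 2))
```

For an input well inside class 1 (here x = 2.0 with sigma = 1.0), the vote fraction is far above 0.5, which is the kind of margin the paper's DP-based analysis converts into a certified radius.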
Citations: 766
Spectre Attacks: Exploiting Speculative Execution
Pub Date: 2018-01-03 | DOI: 10.1145/3399742
P. Kocher, Daniel Genkin, D. Gruss, Werner Haas, Michael Hamburg, Moritz Lipp, S. Mangard, Thomas Prescher, Michael Schwarz, Y. Yarom
Modern processors use branch prediction and speculative execution to maximize performance. For example, if the destination of a branch depends on a memory value that is in the process of being read, CPUs will try to guess the destination and attempt to execute ahead. When the memory value finally arrives, the CPU either discards or commits the speculative computation. Speculative logic is unfaithful in how it executes, can access the victim's memory and registers, and can perform operations with measurable side effects. Spectre attacks involve inducing a victim to speculatively perform operations that would not occur during correct program execution and which leak the victim's confidential information via a side channel to the adversary. This paper describes practical attacks that combine methodology from side channel attacks, fault attacks, and return-oriented programming that can read arbitrary memory from the victim's process. More broadly, the paper shows that speculative execution implementations violate the security assumptions underpinning numerous software security mechanisms, including operating system process separation, containerization, just-in-time (JIT) compilation, and countermeasures to cache timing and side-channel attacks. These attacks represent a serious threat to actual systems since vulnerable speculative execution capabilities are found in microprocessors from Intel, AMD, and ARM that are used in billions of devices. While makeshift processor-specific countermeasures are possible in some cases, sound solutions will require fixes to processor designs as well as updates to instruction set architectures (ISAs) to give hardware architects and software developers a common understanding as to what computation state CPU implementations are (and are not) permitted to leak.
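The bounds-check-bypass pattern the abstract describes can be modeled abstractly: the architectural result of an out-of-bounds access is discarded, but the speculative access still leaves a secret-dependent footprint in the cache, which a later probe recovers. Everything below (the "cache", the probe) is a simulation invented for teaching; a real Spectre attack runs native code and measures actual cache-load latencies.

```python
# Pedagogical model of a Spectre v1 (bounds-check-bypass) leak. The
# victim's bounds check is architecturally respected, yet the access
# still happens "speculatively" and deposits a secret-dependent entry
# into a simulated cache that the attacker can enumerate afterwards.

SECRET = "K"
array = [0] * 16
memory = array + [ord(SECRET)]     # the secret sits just past the bound

cache = set()                      # simulated cache lines, keyed by value

def victim(index):
    in_bounds = index < len(array)
    # Speculative window: the load executes before the bounds check
    # retires, and its value selects which cache line gets touched.
    speculative_value = memory[index]
    cache.add(speculative_value)   # microarchitectural side effect survives
    # Architectural result: out-of-bounds reads are discarded.
    return memory[index] if in_bounds else None

assert victim(len(array)) is None  # attacker gets no architectural value...

# ...but probes which cache line is hot (in reality: fastest to load).
leaked = next(chr(v) for v in range(256) if v in cache)
print(leaked)
```

The point of the model is the asymmetry: the architectural interface returns nothing, while the microarchitectural state (`cache`) still encodes the secret byte.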
Citations: 1730
On the Security of Two-Round Multi-Signatures
Pub Date: 1900-01-01 | DOI: 10.1109/SP.2019.00050
Manu Drijvers, Kasra Edalatnejad, B. Ford, Eike Kiltz, J. Loss, G. Neven, Igors Stepanovs
A multi-signature scheme allows a group of signers to collaboratively sign a message, creating a single signature that convinces a verifier that every individual signer approved the message. The increased interest in technologies to decentralize trust has triggered the proposal of highly efficient two-round Schnorr-based multi-signature schemes designed to scale up to thousands of signers, namely BCJ by Bagherzandi et al. (CCS 2008), MWLD by Ma et al. (DCC 2010), CoSi by Syta et al. (S&P 2016), and MuSig by Maxwell et al. (ePrint 2018). In this work, we point out serious security issues in all currently known two-round multi-signature schemes (without pairings). First, we prove that none of the schemes can be proved secure without radically departing from currently known techniques. Namely, we show that if the one-more discrete-logarithm problem is hard, then no algebraic reduction exists that proves any of these schemes secure under the discrete-logarithm or one-more discrete-logarithm problem. We point out subtle flaws in the published security proofs of the above schemes (except CoSi, which was not proved secure) to clarify the contradiction between our result and the existing proofs. Next, we describe practical sub-exponential attacks on all schemes, providing further evidence to their insecurity. Being left without two-round multi-signature schemes, we present mBCJ, a variant of the BCJ scheme that we prove secure under the discrete-logarithm assumption in the random-oracle model. Our experiments show that mBCJ barely affects scalability compared to CoSi, allowing 16384 signers to collaboratively sign a message in about 2 seconds, making it a highly practical and provably secure alternative for large-scale deployments.
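To make the object of study concrete, here is the plain Schnorr aggregation that two-round schemes build on: signers combine nonce commitments, derive a shared challenge, and sum their responses into one signature verifiable against the product of their public keys. The parameters are tiny toy values chosen so the arithmetic is visible; this naive form is insecure in practice (rogue-key attacks on key aggregation, and the paper's results on two-round variants), so it is a sketch of the structure, not a usable scheme.

```python
import hashlib

# Naive Schnorr multi-signature over a toy group: g generates the
# order-q subgroup of Z_p* with p = 23, q = 11, g = 2. Each signer holds
# (x_i, X_i = g^x_i); the aggregate key is X = prod X_i.

p, q, g = 23, 11, 2

def H(*parts):
    h = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(h, "big") % q

def keygen(x):
    return x % q, pow(g, x, p)

signers = [keygen(3), keygen(7)]            # (secret, public) pairs
X = 1
for _, Xi in signers:
    X = (X * Xi) % p                        # aggregate public key

msg = "pay 1 coin"
nonces = [4, 9]                             # fixed here for reproducibility
R = 1
for r in nonces:
    R = (R * pow(g, r, p)) % p              # round 1: combine commitments

c = H(R, X, msg)                            # shared challenge
s = sum(r + c * x for (x, _), r in zip(signers, nonces)) % q  # round 2

# Anyone can verify the single aggregate signature (R, s):
# g^s = g^(sum r_i + c * sum x_i) = R * X^c.
valid = pow(g, s, p) == (R * pow(X, c, p)) % p
print(valid)
```

The two rounds visible here (exchange commitments, then exchange responses) are exactly the interaction pattern whose provable security the paper refutes for BCJ, MWLD, CoSi, and MuSig; mBCJ restores a proof by changing how commitments are formed.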
Citations: 83