
Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security: latest publications

FSSR: Fine-Grained EHRs Sharing via Similarity-Based Recommendation in Cloud-Assisted eHealthcare System
Cheng Huang, R. Lu, Hui Zhu, Jun Shao, Xiaodong Lin
With the evolution of the eHealthcare industry, electronic health records (EHRs), digital health records stored and managed by patients themselves, are regarded as offering substantial benefits. With EHRs, patients can conveniently share health records with doctors and build up a complete picture of their health. However, due to the sensitivity of EHRs, guaranteeing their security and privacy has become one of patients' foremost concerns. To tackle privacy challenges such as how to enforce fine-grained access control over shared EHRs, how to keep EHRs stored in the cloud confidential, how to audit EHRs, and how to find suitable doctors for patients, in this paper we propose FSSR, a fine-grained EHRs sharing scheme via similarity-based recommendation, accelerated by Locality Sensitive Hashing (LSH), in a cloud-assisted eHealthcare system. Specifically, the proposed scheme allows patients to securely share their EHRs with suitable doctors under fine-grained privacy access control. Detailed security analysis confirms its security properties. In addition, extensive simulations with a prototype of FSSR demonstrate its effectiveness in terms of computational, storage, and communication costs while minimizing privacy disclosure.
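As background on the LSH acceleration mentioned in the abstract, MinHash (one classic LSH family for Jaccard similarity) can match a patient to doctors by keyword overlap. This is an illustration of the general technique only, not FSSR's actual protocol; all names and data below are hypothetical.

```python
import hashlib

def minhash_signature(items, num_hashes=64):
    """MinHash signature: for each seeded hash function, keep the minimum
    hash value over the item set. Matching slots estimate Jaccard similarity."""
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(hashlib.sha256(f"{seed}:{it}".encode()).digest()[:8], "big")
            for it in items
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of equal signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Toy matching: a patient's condition keywords vs. doctors' specialty keywords.
patient = {"cardiology", "hypertension", "arrhythmia"}
doctors = {
    "dr_heart": {"cardiology", "arrhythmia", "surgery"},
    "dr_skin":  {"dermatology", "allergy"},
}
p_sig = minhash_signature(patient)
best = max(doctors, key=lambda d: estimated_jaccard(p_sig, minhash_signature(doctors[d])))
print(best)  # dr_heart
```

The point of the LSH step is that signatures can be compared (or bucketed) without touching the raw profiles, which is what makes similarity-based recommendation cheap at scale.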
DOI: 10.1145/2897845.2897870 (published 2016-05-30)
Citations: 23
Generally Hybrid Proxy Re-Encryption: A Secure Data Sharing among Cryptographic Clouds
Peng Xu, Jun Xu, Wei Wang, Hai Jin, W. Susilo, Deqing Zou
Proxy Re-Encryption (PRE) is a favorable primitive for realizing a cryptographic cloud with a secure and flexible data sharing mechanism. A number of PRE schemes with versatile capabilities have been proposed for different applications. Secure data sharing can be achieved internally within each PRE scheme, but no previous work guarantees secure data sharing among different PRE schemes in a general manner. Moreover, solving this problem is challenging because the existing PRE schemes differ widely in their algebraic systems and public-key types. To solve this problem more generally, this paper unifies the definitions of the existing PRE and Public Key Encryption (PKE) schemes, and further unifies their security definitions. Then, taking any uniformly defined PRE scheme and any uniformly defined PKE scheme as two building blocks, this paper constructs a Generally Hybrid Proxy Re-Encryption (GHPRE) scheme, using the idea of temporary public and private keys, to achieve secure data sharing between the two underlying schemes. Since PKE is a more general definition than PRE, the proposed GHPRE scheme is also workable between any two PRE schemes. Moreover, the proposed GHPRE scheme can be deployed transparently even while the underlying PRE schemes are in operation.
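For readers unfamiliar with the PRE primitive itself (GHPRE's building block), the mechanics can be illustrated with a toy BBS98-style ElGamal proxy re-encryption: the proxy holds a re-encryption key b/a mod q and converts Alice's ciphertexts into Bob's without decrypting. This is a classic textbook construction with deliberately tiny, insecure parameters, not the paper's GHPRE scheme.

```python
import random

# Toy group: subgroup of prime order q in Z_p*, with safe prime p = 2q + 1.
p, q = 467, 233
g = 4  # 4 = 2^2 generates the order-q subgroup of Z_467*

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)

def encrypt(pk, m):
    # BBS98-style ciphertext: (m * g^k, pk^k) = (m * g^k, g^{a*k})
    k = random.randrange(1, q)
    return (m * pow(g, k, p) % p, pow(pk, k, p))

def rekey(sk_a, sk_b):
    # Re-encryption key b / a mod q; the proxy learns neither secret alone.
    return sk_b * pow(sk_a, -1, q) % q

def reencrypt(rk, ct):
    c1, c2 = ct
    return (c1, pow(c2, rk, p))  # turns g^{a*k} into g^{b*k}

def decrypt(sk, ct):
    c1, c2 = ct
    # m = c1 / c2^{1/sk} = (m * g^k) / g^k
    return c1 * pow(pow(c2, pow(sk, -1, q), p), -1, p) % p

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
m = pow(g, 42, p)                 # messages must lie in the subgroup
ct_a = encrypt(a_pk, m)           # encrypted for Alice
ct_b = reencrypt(rekey(a_sk, b_sk), ct_a)  # proxy converts for Bob
print(decrypt(b_sk, ct_b) == m)   # True
```

GHPRE's contribution is bridging *different* such schemes (and plain PKE) via temporary key pairs; the sketch above only shows re-encryption within a single scheme.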
DOI: 10.1145/2897845.2897923 (published 2016-05-30)
Citations: 9
Real-Time Detection of Malware Downloads via Large-Scale URL->File->Machine Graph Mining
Babak Rahbarinia, Marco Balduzzi, R. Perdisci
In this paper we propose Mastino, a novel defense system to detect malware download events. A download event is a 3-tuple that identifies the action, triggered by a client (machine), of downloading a file from a URL. Mastino utilizes global situation awareness, continuously monitors various network- and system-level events of client machines across the Internet, and provides real-time classification of both files and URLs upon submission of a new, unknown file or URL to the system. To enable detection of download events, Mastino builds a large download graph that captures the subtle relationships among the entities of download events, i.e., files, URLs, and machines. We implemented a prototype version of Mastino and evaluated it in a large-scale real-world deployment. Our experimental evaluation shows that Mastino can accurately classify malware download events with an average true positive rate of 95.5% while incurring a false positive rate below 0.5%. In addition, we show that Mastino can classify a new download event as either benign or malware in a fraction of a second, and is therefore suitable as a real-time defense system.
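The download-graph idea can be made concrete with a toy sketch: events are (machine, URL, file) 3-tuples as in the abstract, and shared edges let a known-bad file implicate the URLs that served it and the machines that fetched it. This is not Mastino's actual classifier, only an illustration of the graph structure; all hosts and hashes below are made up.

```python
from collections import defaultdict

# Each download event is a 3-tuple (machine, URL, file).
events = [
    ("m1", "http://evil.example/a", "f_mal"),
    ("m2", "http://evil.example/a", "f_mal"),
    ("m2", "http://cdn.example/b",  "f_good"),
    ("m3", "http://cdn.example/b",  "f_good"),
]
known_malware = {"f_mal"}

# URL -> set of files it served (one slice of the URL->file->machine graph).
url_files = defaultdict(set)
for machine, url, fhash in events:
    url_files[url].add(fhash)

# Naive guilt-by-association over the graph: a URL that served known
# malware is suspicious, and machines that downloaded from it are exposed.
suspicious_urls = {u for u, fs in url_files.items() if fs & known_malware}
exposed_machines = {m for m, u, _ in events if u in suspicious_urls}
print(sorted(suspicious_urls), sorted(exposed_machines))
```

The real system classifies files and URLs jointly over a much larger graph; the value of the 3-tuple representation is exactly that evidence about any one entity type flows to the other two.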
DOI: 10.1145/2897845.2897918 (published 2016-05-30)
Citations: 29
Privacy-Preserving Spectral Analysis of Large Graphs in Public Clouds
Sagar Sharma, James Powers, Keke Chen
Large graph datasets have become invaluable assets for studying problems in business applications and scientific research. These datasets, collected and owned by data owners, may also contain privacy-sensitive information. When using public clouds for elastic processing, data owners have to protect both data ownership and privacy from curious cloud providers. We propose a cloud-centric framework that allows data owners to efficiently collect graph data from distributed data contributors, and to privately store and analyze graph data in the cloud. Data owners can conduct expensive operations in untrusted public clouds with privacy and scalability preserved. The major contributions of this work include two privacy-preserving approximate eigendecomposition algorithms (the secure Lanczos and Nystrom methods) for spectral analysis of large graph matrices, and a personalized privacy-preserving data submission method based on differential privacy that allows a trade-off between data sparsity and privacy. For an N-node graph, the proposed approach allows a data owner to finish the core operations with only O(N) client-side costs in computation, storage, and communication. The expensive O(N^2) operations are performed in the cloud with the proposed privacy-preserving algorithms. We prove that our approach satisfactorily preserves data privacy against untrusted cloud providers. We have conducted an extensive experimental study of these algorithms in terms of the intrinsic relationships among cost, privacy, scalability, and result quality.
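The Lanczos iteration that the secure protocol builds on can be sketched in plain (non-private) form: it reduces a symmetric graph matrix to a small tridiagonal T whose eigenvalues approximate the extreme eigenvalues of A. The paper's privacy-preserving matrix operations are omitted here; this is only the standard numerical building block, on a tiny example graph.

```python
import numpy as np

def lanczos(A, k, rng):
    """Standard Lanczos: build a (k x k or smaller) tridiagonal T whose
    eigenvalues approximate the extreme eigenvalues of symmetric A."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.standard_normal(n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = A @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        if j > 0:
            w -= beta[j - 1] * Q[:, j - 1]
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:     # Krylov space exhausted
            break
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha[:j + 1]) + np.diag(beta[:j], 1) + np.diag(beta[:j], -1)
    return np.linalg.eigvalsh(T)

rng = np.random.default_rng(0)
# Adjacency matrix of a 6-node path graph as a toy symmetric input.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
approx = lanczos(A, 6, rng)
exact = np.linalg.eigvalsh(A)
print(abs(approx.max() - exact.max()) < 1e-6)  # True: top eigenvalue recovered
```

The appeal for outsourcing is visible in the structure: the expensive work is the matrix-vector products `A @ q` (the O(N^2) part pushed to the cloud), while the client-side recurrences touch only O(N)-sized vectors.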
DOI: 10.1145/2897845.2897857 (published 2016-05-30)
Citations: 9
Verifiable Outsourcing Algorithms for Modular Exponentiations with Improved Checkability
Yanli Ren, Ning Ding, Xinpeng Zhang, Haining Lu, Dawu Gu
The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We also present another verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
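The flavor of the one-malicious two-server model can be shown with a deliberately simplified sketch: the exponent is split between two servers and each server's share is recomputed by the other, so a single cheating server is always caught. This is not the paper's algorithm (real schemes also blind the base and exponent so the servers learn nothing, and avoid the doubled work shown here); the server functions are hypothetical stand-ins.

```python
import random

p = 1009  # toy prime modulus

def honest_server(base, exp):
    return pow(base, exp, p)

def lazy_server(base, exp):
    return (pow(base, exp, p) + 1) % p  # simulated misbehavior: always wrong

def outsourced_pow(u, a, server1, server2):
    """Split the exponent a = a1 + a2 and query both servers, then have
    each server recompute the other's share. Under the one-malicious
    assumption (at most one server cheats), any wrong answer is detected."""
    a1 = random.randrange(1, a)
    a2 = a - a1
    r1, r2 = server1(u, a1), server2(u, a2)
    if server2(u, a1) != r1 or server1(u, a2) != r2:
        raise ValueError("misbehaving server detected")
    return r1 * r2 % p  # u^a1 * u^a2 = u^a (mod p)

print(outsourced_pow(5, 100, honest_server, honest_server) == pow(5, 100, p))  # True
```

Detection here has probability 1 only because the honest server's recomputation contradicts any lie, mirroring the checkability property claimed in the abstract.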
DOI: 10.1145/2897845.2897881 (published 2016-05-30)
Citations: 23
Discovering Malicious Domains through Passive DNS Data Graph Analysis
Issa M. Khalil, Ting Yu, Bei Guan
Malicious domains are key components of a variety of cyber attacks. Several recent techniques identify malicious domains through analysis of DNS data. The general approach is to build classifiers based on DNS-related local domain features. One potential problem is that many local features, e.g., domain name patterns and temporal patterns, tend not to be robust: attackers can easily alter them to evade detection without much affecting their attack capabilities. In this paper, we take a complementary approach. Instead of focusing on local features, we propose to discover and analyze global associations among domains. The key challenges are (1) to build meaningful associations among domains, and (2) to use these associations to reason about the potential maliciousness of domains. For the first challenge, we take advantage of the modus operandi of attackers. To avoid detection, malicious domains exhibit dynamic behavior, for example frequently changing malicious domain-IP resolutions and creating new domains. This makes it very likely for attackers to reuse resources. Indeed, it is commonly observed that over a period of time multiple malicious domains are hosted on the same IPs and multiple IPs host the same malicious domains, which creates intrinsic associations among them. For the second challenge, we develop a graph-based inference technique over associated domains, based on the intuition that a domain with strong associations to known malicious domains is likely to be malicious itself. Carefully established associations enable the discovery of a large set of new malicious domains from a very small set of previously known malicious ones. Our experiments over a public passive DNS database show that the proposed technique achieves high true positive rates (over 95%) while maintaining low false positive rates (less than 0.5%). Further, even with a small set of known malicious domains (a couple of hundred), our technique can discover a large set of potential malicious domains (on the scale of up to tens of thousands).
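The guilt-by-association intuition can be sketched as a toy score propagation over a domain-IP bipartite graph built from passive DNS resolutions: seed scores on known-bad domains flow to the IPs they share, and from there to other domains. This is an illustration of the intuition only, not the paper's inference algorithm; the domains and IPs are hypothetical.

```python
from collections import defaultdict

# Domain -> hosting IPs observed in passive DNS (toy data).
resolutions = {
    "bad1.example":    {"ip1", "ip2"},
    "bad2.example":    {"ip2"},
    "unknown.example": {"ip2", "ip3"},
    "benign.example":  {"ip9"},
}
seeds = {"bad1.example": 1.0, "bad2.example": 1.0}  # known malicious

# Build the IP side of the bipartite adjacency.
ip_domains = defaultdict(set)
for d, ips in resolutions.items():
    for ip in ips:
        ip_domains[ip].add(d)

# Iteratively propagate maliciousness scores across shared IPs.
scores = {d: seeds.get(d, 0.0) for d in resolutions}
for _ in range(10):
    ip_score = {ip: sum(scores[d] for d in ds) / len(ds)
                for ip, ds in ip_domains.items()}
    for d in resolutions:
        if d in seeds:
            continue  # keep ground-truth labels fixed
        scores[d] = sum(ip_score[ip] for ip in resolutions[d]) / len(resolutions[d])

print(scores["unknown.example"] > scores["benign.example"])  # True
```

`unknown.example` inherits a nonzero score through the shared `ip2`, while `benign.example`, connected to no tainted infrastructure, stays at zero, which is exactly the resource-reuse signal the abstract describes.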
DOI: 10.1145/2897845.2897877 (published 2016-05-30)
Citations: 79
Credential Wrapping: From Anonymous Password Authentication to Anonymous Biometric Authentication
Yanjiang Yang, Haibing Lu, Joseph K. Liu, J. Weng, Youcheng Zhang, Jianying Zhou
The anonymous password authentication scheme proposed at ACSAC'10, built on the unorthodox approach of password-wrapped credentials, advanced anonymous password authentication into a practically ready primitive, and it is being standardized. In this paper, we improve on that scheme by proposing a new method of "public key suppression" for achieving server-designated credential verifiability, a core technicality in materializing the concept of password-wrapped credentials. Besides better performance, our new method simplifies the configuration of the authentication server, rendering the resulting scheme even more practical. Further, we extend the idea of password-wrapped credentials to biometric-wrapped credentials, to achieve anonymous biometric authentication. As expected, biometric-wrapped credentials help break the linear server-side computation barrier intrinsic to the standard setting of biometric authentication. Experimental results validate the feasibility of realizing efficient anonymous biometric authentication.
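The basic "wrapping" notion, a credential encrypted under a key derived from the password so that only the password holder can recover and use it, can be sketched generically with the standard library. This toy uses PBKDF2 plus a hash-counter keystream purely for illustration; it is not the ACSAC'10 algebraic construction (which needs additional structure so that a wrong password yields a well-formed but useless credential), and real code would use an AEAD cipher.

```python
import hashlib, os

def keystream(key, n):
    """Toy keystream (SHA-256 in counter mode) -- illustration only."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def wrap_credential(password, credential, salt=None):
    """Derive a key from the password and XOR-encrypt the credential."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    ks = keystream(key, len(credential))
    return salt, bytes(a ^ b for a, b in zip(credential, ks))

def unwrap_credential(password, salt, wrapped):
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    ks = keystream(key, len(wrapped))
    return bytes(a ^ b for a, b in zip(wrapped, ks))

cred = b"anonymous-credential-bytes"
salt, wrapped = wrap_credential("correct horse", cred)
print(unwrap_credential("correct horse", salt, wrapped) == cred)  # True
print(unwrap_credential("wrong pass", salt, wrapped) == cred)     # False
```

The anonymity of the scheme comes from how the unwrapped credential is then used in the authentication protocol, which is beyond this sketch.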
DOI: 10.1145/2897845.2897854 (published 2016-05-30)
Citations: 7
Fault Attacks on Efficient Pairing Implementations
Pierre-Alain Fouque, Chen Qian
This paper studies the security of efficient pairing implementations, with both compressed and standard representations, against fault attacks. We show that these attacks solve the Fixed Argument Pairing Inversion problem and recover the first or second argument of the pairing inputs if we can inject double faults on the loop counters. Compared to the first attack of Page and Vercauteren on supersingular elliptic curves in characteristic three, these are the first attacks that address efficient pairing implementations. Most efficient Tate pairings are computed using a Miller loop followed by a Final Exponentiation. Many papers show how to invert only the Miller loop, and a recent paper of Lashermes et al. at CHES 2013 shows how to invert only the Final Exponentiation. For a long time, the Final Exponentiation was used as a countermeasure against inversion of the Miller loop. However, the CHES attack cannot be used to invert this step in efficient, concrete implementations. Indeed, the first two steps of the Final Exponentiation use the Frobenius map for efficiency. The drawback of the CHES 2013 attack is that it only works if these steps are implemented using very expensive inversions; in practice, these inversions are computed using a conjugate, since elements at the end of the first exponentiation are roots of unity. If this natural implementation is used, the CHES 2013 attack is avoided, since it requires injecting a fault such that the faulted elements are not roots of unity. Consequently, it is highly probable that this attack will not work on concrete implementations. For the same reasons, it is not possible to invert the Final Exponentiation in the case of compressed pairings, and both methods (conjugate and compressed) were proposed by Lashermes et al. as countermeasures against their attack. Here, we demonstrate that we can solve the FAPI-1 and FAPI-2 problems for compressed and standard pairing implementations. We demonstrate the efficiency of our attacks by simulations with Sage on concrete implementations.
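The paper's faults target the Miller loop and Final Exponentiation counters; the underlying principle, that truncating an iterative computation leaks its secret one step at a time, can be shown by analogy on plain left-to-right square-and-multiply exponentiation. This is only an analogy, not the pairing attack itself; the parameters are toy values.

```python
p, g = 1009, 11
secret = 0b101101  # the exponent the "attacker" will recover bit by bit

def faulted_exp(n_iters):
    """Left-to-right square-and-multiply whose loop counter has been
    faulted to stop after n_iters iterations: the output then depends
    only on the top n_iters bits of the secret exponent."""
    acc = 1
    for b in bin(secret)[2:][:n_iters]:
        acc = acc * acc % p
        if b == "1":
            acc = acc * g % p
    return acc

# Comparing faulted outputs for j and j+1 iterations reveals one bit:
# r_{j+1} equals r_j^2 (bit 0) or r_j^2 * g (bit 1) modulo p.
recovered = ""
r_prev = faulted_exp(0)  # zero iterations -> 1
for j in range(1, secret.bit_length() + 1):
    r = faulted_exp(j)
    recovered += "0" if r == r_prev * r_prev % p else "1"
    r_prev = r
print(int(recovered, 2) == secret)  # True
```

The double-fault requirement in the paper plays a comparable role: the attacker needs outputs of two differently truncated runs to relate consecutive intermediate states.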
{"title":"Fault Attacks on Efficient Pairing Implementations","authors":"Pierre-Alain Fouque, Chen Qian","doi":"10.1145/2897845.2897907","DOIUrl":"https://doi.org/10.1145/2897845.2897907","url":null,"abstract":"This paper studies the security of efficient pairing implementations with compressed and standard representations against fault attacks. We show that these attacks solve the Fixed Argument Pairing Inversion and recover the first or second argument of the pairing inputs if we can inject double-faults on the loop counters. Compared to the first attack of Page and Vercauteren on supersingular elliptic curves in characteristic three, these are the first attacks which address efficient pairing implementations. Most efficient Tate pairings are computed using a Miller loop followed by a Final Exponentiation. Many papers show how it is possible to invert only the Miller loop and a recent paper of Lashermes et al. at CHES 2013 shows how to invert only the final exponentiation. During a long time, the final exponentiation was used as a countermeasure against the inversion of the Miller loop. However, the CHES attack cannot be used to invert this step on efficient and concrete implementations. Indeed, the two first steps of the Final Exponentiation use the Frobenius map to compute them efficiently. The drawback of the CHES 2013 attack is that it only works if these steps are implemented using very expensive inversions, but in general, these inversions are computed by using a conjugate since elements at the end of the first exponentiation are unicity roots. If this natural implementation is used, the CHES 2013 attack is avoided since it requires to inject a fault so that the faulted elements are not unicity roots. Consequently, it is highly probable that for concrete implementations, this attack will not work. 
For the same reasons, it is not possible to invert the Final Exponentiation in case of compressed pairing and both methods (conjugate and compressed) were proposed by Lashermes et al. as countermeasures against their attack. Here, we demonstrate that we can solve the FAPI-1 and FAPI-2 problems for compressed and standard pairing implementations. We demonstrate the efficiency of our attacks by using simulations with Sage on concrete implementations.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127522810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ORIGEN: Automatic Extraction of Offset-Revealing Instructions for Cross-Version Memory Analysis
Qian Feng, Aravind Prakash, Minghua Wang, Curtis Carmony, Heng Yin
The semantic gap is a prominent problem in raw memory analysis, especially in Virtual Machine Introspection (VMI) and memory forensics. For COTS software, common memory forensics and VMI tools rely on so-called "data structure profiles" -- a mapping between semantic variables and their relative offsets within the structure in the binary. Constructing such profiles requires expert knowledge of the internal workings of a specific software version. Most of the time, it requires considerable manual effort, which often turns out to be a cumbersome process. In this paper, we propose a notion called "cross-version memory analysis", wherein our goal is to ease profile construction for a new version of a piece of software by transferring knowledge from a model that has already been trained on an old version. To this end, we first identify the Offset-Revealing Instructions (ORIs) in a given piece of software and then leverage code search techniques to label ORIs in an unknown version of the same software. With labeled ORIs, we can localize the profile for the new version. We provide a proof-of-concept implementation called ORIGEN. The efficacy and efficiency of ORIGEN have been empirically verified on a number of software packages. The experimental results show that by conducting the ORI search within Windows XP SP0 and Linux 3.5.0, we can successfully recover the data structure profiles for Windows XP SP2, Vista, and Windows 7, and for Linux 2.6.32, 3.8.0, and 3.13.0, respectively. A systematic evaluation on 40 versions of OpenSSH demonstrates that ORIGEN achieves a precision of more than 90%. As a case study, we integrate ORIGEN into a VMI tool to automatically extract the semantic information required for VMI. We develop two plugins for the Volatility memory forensics framework, one for OpenSSH session key extraction and the other for encrypted filesystem key extraction. Both achieve cross-version analysis via ORIGEN.
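To make the idea of offset-revealing instructions concrete, here is a hypothetical sketch (not ORIGEN's actual pipeline): scan a textual disassembly for `[register + constant]` memory accesses and collect the constants as candidate structure-field offsets. Register names and the example listing are purely illustrative.

```python
# Hypothetical ORI-style scan: memory accesses of the form [reg + 0xNN]
# reveal relative field offsets within a structure held in a register.
import re

ACCESS = re.compile(r"\[(?:rax|rbx|rcx|rdx|rsi|rdi)\s*\+\s*0x([0-9a-f]+)\]")

def candidate_offsets(disassembly):
    """Collect distinct [reg + const] displacements as candidate offsets."""
    offsets = set()
    for line in disassembly.splitlines():
        m = ACCESS.search(line)
        if m:
            offsets.add(int(m.group(1), 16))
    return sorted(offsets)

listing = """
mov rax, [rdi+0x18]   ; e.g. a pid-like field
mov rbx, [rdi+0x3f0]  ; e.g. a name-like field
add rcx, [rsi+0x18]
"""
print(candidate_offsets(listing))  # → [24, 1008]
```

A real system would of course work on decoded instructions rather than text, and would pair each offset with the semantic variable it reveals.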
{"title":"ORIGEN: Automatic Extraction of Offset-Revealing Instructions for Cross-Version Memory Analysis","authors":"Qian Feng, Aravind Prakash, Minghua Wang, Curtis Carmony, Heng Yin","doi":"10.1145/2897845.2897850","DOIUrl":"https://doi.org/10.1145/2897845.2897850","url":null,"abstract":"Semantic gap is a prominent problem in raw memory analysis, especially in Virtual Machine Introspection (VMI) and memory forensics. For COTS software, common memory forensics and VMI tools rely on the so-called \"data structure profiles\" -- a mapping between the semantic variables and their relative offsets within the structure in the binary. Construction of such profiles requires the expert knowledge about the internal working of a specified software version. At most time, it requires considerable manual efforts, which often turns out to be a cumbersome process. In this paper, we propose a notion named \"cross-version memory analysis\", wherein our goal is to alleviate the process of profile construction for new versions of a software by transferring the knowledge from the model that has already been trained on its old version. To this end, we first identify such Offset Revealing Instructions (ORI) in a given software and then leverage the code search techniques to label ORIs in an unknown version of the same software. With labeled ORIs, we can localize the profile for the new version. We provide a proof-of-concept implementation called ORIGEN. The efficacy and efficiency of ORIGEN have been empirically verified by a number of softwares. The experimental results show that by conducting the ORI search within Windows XP SP0 and Linux 3.5.0, we can successfully recover the data structure profiles for Windows XP SP2, Vista, Win 7, and Linux 2.6.32, 3.8.0, 3.13.0, respectively. The systematical evaluation on 40 versions of OpenSSH demonstrates ORIGEN can achieve a precision of more than 90%. 
As a case study, we integrate ORIGEN into a VMI tool to automatically extract semantic information required for VMI. We develop two plugins to the Volatility memory forensic framework, one for OpenSSH session key extraction, the other for encrypted filesystem key extraction. Both of them can achieve the cross-version analysis by ORIGEN.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128799855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
R-Droid: Leveraging Android App Analysis with Static Slice Optimization
M. Backes, Sven Bugiel, Erik Derr, S. Gerling, Christian Hammer
Today's feature-rich smartphone apps rely intensively on access to highly sensitive (personal) data. This puts the user's privacy at risk of being violated by overly curious apps or libraries (such as advertisement libraries). Central app markets conceptually represent a first line of defense against such invasions of the user's privacy, but unfortunately we still lack full support for automatically analyzing apps' internal data flows and for supporting analysts in statically assessing apps' behavior. In this paper we present a novel slice-optimization approach that leverages static analysis of Android applications. Building on top of precise application lifecycle models, we employ a slicing-based analysis to generate data-dependent statements for arbitrary points of interest in an application. As a result of our optimization, the produced slices are, on average, 49% smaller than standard slices, thus facilitating code understanding and result validation by security analysts. Moreover, by re-targeting strings, our approach enables automatic assessments for a larger number of use cases than prior work. We consolidate our improvements on statically analyzing Android apps into a tool called R-Droid and conducted a large-scale data-leak analysis on a set of 22,700 Android apps from Google Play. R-Droid managed to identify a significantly larger set of potential privacy-violating information flows than previous work, including 2,157 sensitive flows of password-flagged UI widgets in 256 distinct apps.
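The slicing-based analysis can be pictured as a backward slice over a data-dependence graph: starting from a point of interest, keep only the statements the criterion transitively depends on. The statement names and dependencies below are illustrative toys, not R-Droid's actual program representation:

```python
# Toy backward slice: deps maps each statement to the set of statements
# it data-depends on; the slice is the transitive closure from the criterion.
def backward_slice(deps, criterion):
    keep, work = set(), [criterion]
    while work:
        s = work.pop()
        if s not in keep:
            keep.add(s)
            work.extend(deps.get(s, ()))
    return keep

deps = {
    "send(url, pwd)": {"pwd = field.getText()", "url = buildUrl()"},
    "pwd = field.getText()": set(),
    "url = buildUrl()": {"host = config.host"},
    "log(ts)": set(),  # unrelated statement: excluded from the slice
}
slice_ = backward_slice(deps, "send(url, pwd)")
assert "log(ts)" not in slice_ and len(slice_) == 4
```

Slice optimization then shrinks this set further (the paper reports 49% on average) so an analyst reviews only statements actually relevant to the sink.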
{"title":"R-Droid: Leveraging Android App Analysis with Static Slice Optimization","authors":"M. Backes, Sven Bugiel, Erik Derr, S. Gerling, Christian Hammer","doi":"10.1145/2897845.2897927","DOIUrl":"https://doi.org/10.1145/2897845.2897927","url":null,"abstract":"Today's feature-rich smartphone apps intensively rely on access to highly sensitive (personal) data. This puts the user's privacy at risk of being violated by overly curious apps or libraries (like advertisements). Central app markets conceptually represent a first line of defense against such invasions of the user's privacy, but unfortunately we are still lacking full support for automatic analysis of apps' internal data flows and supporting analysts in statically assessing apps' behavior. In this paper we present a novel slice-optimization approach to leverage static analysis of Android applications. Building on top of precise application lifecycle models, we employ a slicing-based analysis to generate data-dependent statements for arbitrary points of interest in an application. As a result of our optimization, the produced slices are, on average, 49% smaller than standard slices, thus facilitating code understanding and result validation by security analysts. Moreover, by re-targeting strings, our approach enables automatic assessments for a larger number of use-cases than prior work. We consolidate our improvements on statically analyzing Android apps into a tool called R-Droid and conducted a large-scale data-leak analysis on a set of 22,700 Android apps from Google Play. 
R-Droid managed to identify a significantly larger set of potential privacy-violating information flows than previous work, including 2,157 sensitive flows of password-flagged UI widgets in 256 distinct apps.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116737849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35