FSSR: Fine-Grained EHRs Sharing via Similarity-Based Recommendation in Cloud-Assisted eHealthcare System
Cheng Huang, R. Lu, Hui Zhu, Jun Shao, Xiaodong Lin
DOI: 10.1145/2897845.2897870
With the evolution of the e-healthcare industry, electronic health records (EHRs), digital health records stored and managed by patients, are expected to provide substantial benefits. With EHRs, patients can conveniently share health records with doctors and build up a complete picture of their health. However, because EHRs are sensitive, guaranteeing their security and privacy is one of the issues patients care about most. To tackle these privacy challenges, namely how to enforce fine-grained access control on shared EHRs, how to keep EHRs stored in the cloud confidential, how to audit EHRs, and how to find suitable doctors for patients, this paper proposes FSSR, a fine-grained EHRs sharing scheme built on similarity-based recommendation and accelerated by Locality Sensitive Hashing (LSH) in a cloud-assisted e-healthcare system. Specifically, the proposed scheme allows patients to securely share their EHRs with suitable doctors under fine-grained privacy access control. A detailed security analysis confirms its security properties. In addition, extensive simulations with a prototype of FSSR are conducted, and the performance evaluations demonstrate FSSR's effectiveness in terms of computational, storage, and communication costs while minimizing privacy disclosure.
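The abstract does not include the construction itself; as a rough illustration of the LSH acceleration it mentions, the sketch below uses random-hyperplane (SimHash-style) locality-sensitive hashing to bucket doctor profiles so a patient's candidate matches are found without comparing against every doctor. The feature encoding, names, and parameters are all hypothetical, not FSSR's actual protocol.

```python
import numpy as np

# Random-hyperplane LSH: vectors with high cosine similarity are likely to
# receive identical sign-bit signatures and thus land in the same bucket.
# Real systems use several independent hash tables to boost recall.
rng = np.random.default_rng(7)
DIM, BITS = 32, 8                        # feature dimension, signature length
planes = rng.normal(size=(BITS, DIM))    # shared random hyperplanes

def signature(profile: np.ndarray) -> int:
    bits = (planes @ profile) >= 0       # one sign bit per hyperplane
    return int("".join("1" if b else "0" for b in bits), 2)

# Hypothetical profiles: doctors' specialty vectors, a patient's symptom vector.
doctors = {f"doctor_{i}": rng.normal(size=DIM) for i in range(1000)}
buckets: dict[int, list[str]] = {}
for name, vec in doctors.items():
    buckets.setdefault(signature(vec), []).append(name)

patient = rng.normal(size=DIM)
candidates = buckets.get(signature(patient), [])
print(f"{len(candidates)} candidate doctors share the patient's LSH bucket")
```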
{"title":"FSSR: Fine-Grained EHRs Sharing via Similarity-Based Recommendation in Cloud-Assisted eHealthcare System","authors":"Cheng Huang, R. Lu, Hui Zhu, Jun Shao, Xiaodong Lin","doi":"10.1145/2897845.2897870","DOIUrl":"https://doi.org/10.1145/2897845.2897870","url":null,"abstract":"With the evolving of ehealthcare industry, electronic health records (EHRs), as one of the digital health records stored and managed by patients, have been regarded to provide more benefits. With the EHRs, patients can conveniently share health records with doctors and build up a complete picture of their health. However, due to the sensitivity of EHRs, how to guarantee the security and privacy of EHRs becomes one of the most important issues concerned by patients. To tackle these privacy challenges such as how to make a fine-grained access control on the shared EHRs, how to keep the confidentiality of EHRs stored in cloud, how to audit EHRs and how to find the suitable doctors for patients, in this paper, we propose a fine-grained EHRs sharing scheme via similarity-based recommendation accelerated by Locality Sensitive Hashing (LSH) in cloud-assisted ehealthcare system, called FSSR. Specifically, our proposed scheme allows patients to securely share their EHRs with some suitable doctors under fine-grained privacy access control. Detailed security analysis confirms its security prosperities. In addition, extensive simulations by developing a prototype of FSSR are also conducted, and the performance evaluations demonstrate the FSSR's effectiveness in terms of computational cost, storage and communication cost while minimizing the privacy disclosure.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131216310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generally Hybrid Proxy Re-Encryption: A Secure Data Sharing among Cryptographic Clouds
Peng Xu, Jun Xu, Wei Wang, Hai Jin, W. Susilo, Deqing Zou
DOI: 10.1145/2897845.2897923
Proxy Re-Encryption (PRE) is a favorable primitive for realizing a cryptographic cloud with a secure and flexible data-sharing mechanism. A number of PRE schemes with versatile capabilities have been proposed for different applications. Secure data sharing can be achieved internally within each PRE scheme, but no previous work guarantees secure data sharing among different PRE schemes in a general manner. Moreover, the problem is challenging to solve because the existing PRE schemes differ widely in their algebraic systems and public-key types. To solve this problem in general, this paper unifies the definitions of the existing PRE and Public Key Encryption (PKE) schemes, and further unifies their security definitions. Then, taking any uniformly defined PRE scheme and any uniformly defined PKE scheme as two building blocks, it constructs a Generally Hybrid Proxy Re-Encryption (GHPRE) scheme that uses temporary public and private keys to achieve secure data sharing between the two underlying schemes. Since PKE is a more general notion than PRE, the proposed GHPRE scheme also works between any two PRE schemes. Moreover, GHPRE can be deployed transparently even if the underlying PRE schemes are already in operation.
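GHPRE itself is defined over arbitrary unified PRE/PKE schemes and cannot be reconstructed from the abstract alone; the toy below instead shows the basic proxy re-encryption mechanic such schemes build on, using the classic BBS98 ElGamal construction over a small Schnorr group. Parameters are deliberately tiny and insecure, and (as in BBS98) the re-encryption key is derived from both secret keys; this is a sketch of the primitive, not the paper's scheme.

```python
import random

# Toy Schnorr group: p = 2q + 1 with q prime; g = 4 generates the order-q
# subgroup. Tiny, insecure parameters chosen for readability only.
p, q, g = 467, 233, 4

def keygen():
    sk = random.randrange(1, q)
    return sk, pow(g, sk, p)                    # (secret key, public key g^sk)

def encrypt(pk, m):
    r = random.randrange(1, q)
    return (m * pow(g, r, p)) % p, pow(pk, r, p)   # (m * g^r, g^(sk*r))

def decrypt(sk, ct):
    c1, c2 = ct
    gr = pow(c2, pow(sk, -1, q), p)             # recover g^r = c2^(1/sk)
    return (c1 * pow(gr, -1, p)) % p

def rekey(sk_a, sk_b):
    return (sk_b * pow(sk_a, -1, q)) % q        # rk = b/a mod q

def reencrypt(rk, ct):
    c1, c2 = ct
    return c1, pow(c2, rk, p)                   # turns g^(a*r) into g^(b*r)

sk_a, pk_a = keygen()
sk_b, pk_b = keygen()
m = 123
ct_a = encrypt(pk_a, m)
ct_b = reencrypt(rekey(sk_a, sk_b), ct_a)       # the proxy never sees m
assert decrypt(sk_a, ct_a) == m == decrypt(sk_b, ct_b)
print("re-encrypted ciphertext decrypts correctly under the second key")
```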
{"title":"Generally Hybrid Proxy Re-Encryption: A Secure Data Sharing among Cryptographic Clouds","authors":"Peng Xu, Jun Xu, Wei Wang, Hai Jin, W. Susilo, Deqing Zou","doi":"10.1145/2897845.2897923","DOIUrl":"https://doi.org/10.1145/2897845.2897923","url":null,"abstract":"Proxy Re-Encryption (PRE) is a favorable primitive to realize a cryptographic cloud with secure and flexible data sharing mechanism. A number of PRE schemes with versatile capabilities have been proposed for different applications. The secure data sharing can be internally achieved in each PRE scheme. But no previous work can guarantee the secure data sharing among different PRE schemes in a general manner. Moreover, it is challenging to solve this problem due to huge differences among the existing PRE schemes in their algebraic systems and public-key types. To solve this problem more generally, this paper uniforms the definitions of the existing PRE and Public Key Encryption (PKE) schemes, and further uniforms their security definitions. Then taking any uniformly defined PRE scheme and any uniformly defined PKE scheme as two building blocks, this paper constructs a Generally Hybrid Proxy Re-Encryption (GHPRE) scheme with the idea of temporary public and private keys to achieve secure data sharing between these two underlying schemes. Since PKE is a more general definition than PRE, the proposed GHPRE scheme also is workable between any two PRE schemes. Moreover, the proposed GHPRE scheme can be transparently deployed even if the underlying PRE schemes are implementing.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114281940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-Time Detection of Malware Downloads via Large-Scale URL->File->Machine Graph Mining
Babak Rahbarinia, Marco Balduzzi, R. Perdisci
DOI: 10.1145/2897845.2897918
In this paper we propose Mastino, a novel defense system to detect malware download events. A download event is a 3-tuple that identifies the action of downloading a file from a URL, triggered by a client (machine). Mastino utilizes global situational awareness: it continuously monitors various network- and system-level events on client machines across the Internet and provides real-time classification of both files and URLs upon submission of a new, unknown file or URL to the system. To enable detection of download events, Mastino builds a large download graph that captures the subtle relationships among the entities of download events, i.e., files, URLs, and machines. We implemented a prototype of Mastino and evaluated it in a large-scale real-world deployment. Our experimental evaluation shows that Mastino can accurately classify malware download events with an average true positive rate of 95.5% while incurring a false positive rate of less than 0.5%. In addition, we show that Mastino can classify a new download event as either benign or malicious in a fraction of a second, and is therefore suitable as a real-time defense system.
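The abstract only outlines the download graph; as a hypothetical miniature, the sketch below builds a tripartite machine-URL-file graph with networkx and scores an unknown file by the average known label in its two-hop neighborhood. Mastino's real classifier is far richer; the events, labels, and scoring rule here are invented for illustration.

```python
import networkx as nx

# Each download event is a 3-tuple (machine, url, file_hash); connecting
# machine-url and url-file links entities that share infrastructure.
events = [
    ("host1", "http://evil.example/payload.exe", "f_mal1"),
    ("host2", "http://evil.example/payload.exe", "f_mal1"),
    ("host2", "http://evil.example/payload.exe", "f_unknown"),  # same URL, new file
    ("host3", "http://cdn.example/tool.zip",     "f_benign"),
]
labels = {"f_mal1": 1.0, "f_benign": 0.0}   # ground truth from black/whitelists

G = nx.Graph()
for machine, url, file_hash in events:
    G.add_edge(machine, url)
    G.add_edge(url, file_hash)

def neighborhood_score(node, hops=2):
    """Average known label within `hops` of the node (naive propagation)."""
    near = nx.single_source_shortest_path_length(G, node, cutoff=hops)
    known = [labels[n] for n in near if n in labels and n != node]
    return sum(known) / len(known) if known else None

print("f_unknown maliciousness:", neighborhood_score("f_unknown"))  # -> 1.0
```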
{"title":"Real-Time Detection of Malware Downloads via Large-Scale URL->File->Machine Graph Mining","authors":"Babak Rahbarinia, Marco Balduzzi, R. Perdisci","doi":"10.1145/2897845.2897918","DOIUrl":"https://doi.org/10.1145/2897845.2897918","url":null,"abstract":"In this paper we propose Mastino, a novel defense system to detect malware download events. A download event is a 3-tuple that identifies the action of downloading a file from a URL that was triggered by a client (machine). Mastino utilizes global situation awareness and continuously monitors various network- and system-level events of the clients' machines across the Internet and provides real time classification of both files and URLs to the clients upon submission of a new, unknown file or URL to the system. To enable detection of the download events, Mastino builds a large download graph that captures the subtle relationships among the entities of download events, i.e. files, URLs, and machines. We implemented a prototype version of Mastino and evaluated it in a large-scale real-world deployment. Our experimental evaluation shows that Mastino can accurately classify malware download events with an average of 95.5% true positive (TP), while incurring less than 0.5% false positives (FP). In addition, we show the Mastino can classify a new download event as either benign or malware in just a fraction of a second, and is therefore suitable as a real time defense system.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127854562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy-Preserving Spectral Analysis of Large Graphs in Public Clouds
Sagar Sharma, James Powers, Keke Chen
DOI: 10.1145/2897845.2897857
Large graph datasets have become invaluable assets for studying problems in business applications and scientific research. These datasets, collected and owned by data owners, may also contain privacy-sensitive information. When using public clouds for elastic processing, data owners have to protect both data ownership and privacy from curious cloud providers. We propose a cloud-centric framework that allows data owners to efficiently collect graph data from distributed data contributors, and to privately store and analyze graph data in the cloud. Data owners can conduct expensive operations in untrusted public clouds with privacy and scalability preserved. The major contributions of this work include two privacy-preserving approximate eigendecomposition algorithms (the secure Lanczos and Nyström methods) for spectral analysis of large graph matrices, and a personalized privacy-preserving data submission method based on differential privacy that allows a trade-off between data sparsity and privacy. For an N-node graph, the proposed approach allows a data owner to finish the core operations with only O(N) client-side costs in computation, storage, and communication. The expensive O(N²) operations are performed in the cloud by the proposed privacy-preserving algorithms. We prove that our approach preserves data privacy against untrusted cloud providers. We have conducted an extensive experimental study investigating these algorithms in terms of the intrinsic relationships among cost, privacy, scalability, and result quality.
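The paper's contribution is making this step privacy-preserving; the plain, non-private Lanczos iteration it builds on is sketched below with NumPy. The cost split the abstract mentions is visible in the structure: each iteration needs one matrix-vector product (the O(N²) work, outsourceable to the cloud) plus O(N) vector arithmetic on the client side. This is a generic Lanczos sketch, not the authors' secure variant.

```python
import numpy as np

def lanczos(matvec, n, k, rng=np.random.default_rng(0)):
    """k-step Lanczos: approximates extreme eigenvalues of a symmetric
    n x n matrix accessed only through matrix-vector products."""
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q = rng.normal(size=n)
    Q[:, 0] = q / np.linalg.norm(q)
    for j in range(k):
        w = matvec(Q[:, j])                       # the O(n^2) step (cloud-side)
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j] + (beta[j - 1] * Q[:, j - 1] if j > 0 else 0)
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)  # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                       # invariant subspace found
            k = j + 1
            break
        Q[:, j + 1] = w / beta[j]
    # Tridiagonal T_k holds the Ritz approximations to A's spectrum.
    T = np.diag(alpha[:k]) + np.diag(beta[:k - 1], 1) + np.diag(beta[:k - 1], -1)
    return np.linalg.eigvalsh(T)

# Toy symmetric "graph matrix".
rng = np.random.default_rng(1)
A = rng.normal(size=(200, 200))
A = (A + A.T) / 2
ritz = lanczos(lambda v: A @ v, 200, 30)
print("largest Ritz value vs true:", ritz[-1], np.linalg.eigvalsh(A)[-1])
```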
{"title":"Privacy-Preserving Spectral Analysis of Large Graphs in Public Clouds","authors":"Sagar Sharma, James Powers, Keke Chen","doi":"10.1145/2897845.2897857","DOIUrl":"https://doi.org/10.1145/2897845.2897857","url":null,"abstract":"Large graph datasets have become invaluable assets for studying problems in business applications and scientific research. These datasets, collected and owned by data owners, may also contain privacy-sensitive information. When using public clouds for elastic processing, data owners have to protect both data ownership and privacy from curious cloud providers. We propose a cloud-centric framework that allows data owners to efficiently collect graph data from the distributed data contributors, and privately store and analyze graph data in the cloud. Data owners can conduct expensive operations in untrusted public clouds with privacy and scalability preserved. The major contributions of this work include two privacy-preserving approximate eigen decomposition algorithms (the secure Lanczos and Nystrom methods) for spectral analysis of large graph matrices, and a personalized privacy-preserving data submission method based on differential privacy that allows for the trade-off between data sparsity and privacy. For a N-node graph, the proposed approach allows a data owner to finish the core operations with only O(N) client-side costs in computation, storage, and communication. The expensive O(N2) operations are performed in the cloud with the proposed privacy-preserving algorithms. We prove that our approach can satisfactorily preserve data privacy against the untrusted cloud providers. We have conducted an extensive experimental study to investigate these algorithms in terms of the intrinsic relationships among costs, privacy, scalability, and result quality.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133960121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verifiable Outsourcing Algorithms for Modular Exponentiations with Improved Checkability
Yanli Ren, Ning Ding, Xinpeng Zhang, Haining Lu, Dawu Gu
DOI: 10.1145/2897845.2897881
The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure, verifiable outsourcing algorithm for single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer can detect any failure with probability 1 if one of the servers misbehaves. We then present a second verifiable outsourcing algorithm, for multiple modular exponentiations, based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we use the proposed algorithms as subroutines to achieve outsource-secure polynomial evaluation and a ciphertext-policy attribute-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.
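The paper's algorithms rely on carefully precomputed blinding pairs and cannot be reconstructed from the abstract; the toy below only conveys the flavor of the two-untrusted-server model. The secret exponent is additively split so neither server sees it, and the same value is recomputed from an independent second split so that a server returning a wrong answer is caught with high probability. This is a didactic simplification, not the authors' algorithm (real schemes also blind the base and achieve detection with probability 1).

```python
import random

P = (1 << 127) - 1                    # Mersenne prime modulus (toy choice)

def server(honest=True):
    def answer(base, exp):
        r = pow(base, exp, P)
        # A misbehaving server returns garbage instead of base^exp.
        return r if honest else random.randrange(1, P)
    return answer

def outsourced_pow(u, a, s1, s2):
    """Compute u^a mod P with a kept secret: additively split the exponent
    mod P-1 so neither server learns a; cross-check with a second split."""
    phi = P - 1
    a1 = random.randrange(phi); a2 = (a - a1) % phi   # split 1
    b1 = random.randrange(phi); b2 = (a - b1) % phi   # split 2 (the check)
    y  = (s1(u, a1) * s2(u, a2)) % P
    y2 = (s1(u, b1) * s2(u, b2)) % P
    if y != y2:
        raise RuntimeError("inconsistent results: a server misbehaved")
    return y

u, a = 12345, random.randrange(P - 1)
assert outsourced_pow(u, a, server(), server()) == pow(u, a, P)
try:
    outsourced_pow(u, a, server(), server(honest=False))
except RuntimeError as e:
    print("detected:", e)
```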
{"title":"Verifiable Outsourcing Algorithms for Modular Exponentiations with Improved Checkability","authors":"Yanli Ren, Ning Ding, Xinpeng Zhang, Haining Lu, Dawu Gu","doi":"10.1145/2897845.2897881","DOIUrl":"https://doi.org/10.1145/2897845.2897881","url":null,"abstract":"The problem of securely outsourcing computation has received widespread attention due to the development of cloud computing and mobile devices. In this paper, we first propose a secure verifiable outsourcing algorithm of single modular exponentiation based on the one-malicious model of two untrusted servers. The outsourcer could detect any failure with probability 1 if one of the servers misbehaves. We also present the other verifiable outsourcing algorithm for multiple modular exponentiations based on the same model. Compared with the state-of-the-art algorithms, the proposed algorithms improve both checkability and efficiency for the outsourcer. Finally, we utilize the proposed algorithms as two subroutines to achieve outsource-secure polynomial evaluation and ciphertext-policy attributed-based encryption (CP-ABE) scheme with verifiable outsourced encryption and decryption.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115640598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Discovering Malicious Domains through Passive DNS Data Graph Analysis
Issa M. Khalil, Ting Yu, Bei Guan
DOI: 10.1145/2897845.2897877
Malicious domains are key components of a variety of cyber attacks. Several recent techniques have been proposed to identify malicious domains through analysis of DNS data. The general approach is to build classifiers based on DNS-related local domain features. One potential problem is that many local features, e.g., domain name patterns and temporal patterns, tend not to be robust: attackers can easily alter them to evade detection without much effect on their attack capabilities. In this paper, we take a complementary approach. Instead of focusing on local features, we propose to discover and analyze global associations among domains. The key challenges are (1) to build meaningful associations among domains, and (2) to use these associations to reason about the potential maliciousness of domains. For the first challenge, we take advantage of the modus operandi of attackers. To avoid detection, malicious domains exhibit dynamic behavior, for example by frequently changing malicious domain-IP resolutions and creating new domains. This makes it very likely for attackers to reuse resources. Indeed, it is commonly observed that over a period of time multiple malicious domains are hosted on the same IPs and multiple IPs host the same malicious domains, which creates intrinsic associations among them. For the second challenge, we develop a graph-based inference technique over associated domains. Our approach is based on the intuition that a domain with strong associations to known malicious domains is likely to be malicious itself. Carefully established associations enable the discovery of a large set of new malicious domains from a very small set of previously known ones. Our experiments on a public passive DNS database show that the proposed technique achieves high true positive rates (over 95%) while maintaining low false positive rates (less than 0.5%). Further, even with a small set of known malicious domains (a couple of hundred), our technique can discover a large set of potentially malicious domains (on the scale of tens of thousands).
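As a hypothetical miniature of the shared-hosting intuition, the sketch below derives domain-domain associations from overlapping resolved-IP sets (Jaccard similarity) and iteratively flags unlabeled domains strongly associated with known-bad seeds. The records, threshold, and iteration count are invented; the paper's actual inference is a more principled graph technique.

```python
# Passive-DNS-style records: domain -> set of IPs it resolved to over time.
resolutions = {
    "bad-seed.com":  {"10.0.0.1", "10.0.0.2"},
    "unknown-a.com": {"10.0.0.1", "10.0.0.2", "10.0.0.3"},
    "unknown-b.com": {"10.0.0.3"},
    "benign.org":    {"192.0.2.7"},
}
seeds = {"bad-seed.com"}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def expand(seeds, threshold=0.4, rounds=2):
    """Iteratively flag domains whose IP footprint overlaps flagged ones."""
    flagged = set(seeds)
    for _ in range(rounds):
        for domain, ips in resolutions.items():
            if domain in flagged:
                continue
            if any(jaccard(ips, resolutions[f]) >= threshold for f in flagged):
                flagged.add(domain)
    return flagged - set(seeds)

print("newly flagged:", expand(seeds))   # unknown-a shares 2 of 3 IPs with the seed
```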
{"title":"Discovering Malicious Domains through Passive DNS Data Graph Analysis","authors":"Issa M. Khalil, Ting Yu, Bei Guan","doi":"10.1145/2897845.2897877","DOIUrl":"https://doi.org/10.1145/2897845.2897877","url":null,"abstract":"Malicious domains are key components to a variety of cyber attacks. Several recent techniques are proposed to identify malicious domains through analysis of DNS data. The general approach is to build classifiers based on DNS-related local domain features. One potential problem is that many local features, e.g., domain name patterns and temporal patterns, tend to be not robust. Attackers could easily alter these features to evade detection without affecting much their attack capabilities. In this paper, we take a complementary approach. Instead of focusing on local features, we propose to discover and analyze global associations among domains. The key challenges are (1) to build meaningful associations among domains; and (2) to use these associations to reason about the potential maliciousness of domains. For the first challenge, we take advantage of the modus operandi of attackers. To avoid detection, malicious domains exhibit dynamic behavior by, for example, frequently changing the malicious domain-IP resolutions and creating new domains. This makes it very likely for attackers to reuse resources. It is indeed commonly observed that over a period of time multiple malicious domains are hosted on the same IPs and multiple IPs host the same malicious domains, which creates intrinsic association among them. For the second challenge, we develop a graph-based inference technique over associated domains. Our approach is based on the intuition that a domain having strong associations with known malicious domains is likely to be malicious. Carefully established associations enable the discovery of a large set of new malicious domains using a very small set of previously known malicious ones. Our experiments over a public passive DNS database show that the proposed technique can achieve high true positive rates (over 95%) while maintaining low false positive rates (less than 0.5%). Further, even with a small set of known malicious domains (a couple of hundreds), our technique can discover a large set of potential malicious domains (in the scale of up to tens of thousands).","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116513369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Credential Wrapping: From Anonymous Password Authentication to Anonymous Biometric Authentication
Yanjiang Yang, Haibing Lu, Joseph K. Liu, J. Weng, Youcheng Zhang, Jianying Zhou
DOI: 10.1145/2897845.2897854
The anonymous password authentication scheme proposed at ACSAC'10, built on the unorthodox approach of password-wrapped credentials, advanced anonymous password authentication to a practically ready primitive, and it is being standardized. In this paper, we improve on that scheme by proposing a new method of "public key suppression" for achieving server-designated credential verifiability, a core technicality in materializing the concept of password-wrapped credentials. Besides better performance, our new method simplifies the configuration of the authentication server, rendering the resulting scheme even more practical. Further, we extend the idea of password-wrapped credentials to biometric-wrapped credentials, achieving anonymous biometric authentication. As expected, biometric-wrapped credentials help break the linear server-side computation barrier intrinsic to the standard setting of biometric authentication. Experimental results validate the feasibility of realizing efficient anonymous biometric authentication.
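The protocol details are not in the abstract; the fragment below only illustrates the generic "wrapped credential" idea: a credential (here an opaque key blob) is encrypted under a key derived from the password, so only the password holder can unwrap and use it. It uses stdlib PBKDF2 plus AES-GCM from the third-party `cryptography` package, and is not the paper's construction.

```python
import os, hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def wrap_credential(password: str, credential: bytes) -> bytes:
    """Encrypt a credential under a password-derived key (PBKDF2 + AES-GCM)."""
    salt, nonce = os.urandom(16), os.urandom(12)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + nonce + AESGCM(key).encrypt(nonce, credential, None)

def unwrap_credential(password: str, blob: bytes) -> bytes:
    salt, nonce, ct = blob[:16], blob[16:28], blob[28:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return AESGCM(key).decrypt(nonce, ct, None)   # raises on a wrong password

blob = wrap_credential("correct horse", b"opaque-anonymous-credential-key")
assert unwrap_credential("correct horse", blob) == b"opaque-anonymous-credential-key"
```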
{"title":"Credential Wrapping: From Anonymous Password Authentication to Anonymous Biometric Authentication","authors":"Yanjiang Yang, Haibing Lu, Joseph K. Liu, J. Weng, Youcheng Zhang, Jianying Zhou","doi":"10.1145/2897845.2897854","DOIUrl":"https://doi.org/10.1145/2897845.2897854","url":null,"abstract":"The anonymous password authentication scheme proposed in ACSAC'10 under an unorthodox approach of password wrapped credentials advanced anonymous password authentication to be a practically ready primitive, and it is being standardized. In this paper, we improve on that scheme by proposing a new method of \"public key suppression\" for achieving server-designated credential verifiability, a core technicality in materializing the concept of password wrapped credential. Besides better performance, our new method simplifies the configuration of the authentication server, rendering the resulting scheme even more practical. Further, we extend the idea of password wrapped credential to biometric wrapped credential}, to achieve anonymous biometric authentication. As expected, biometric wrapped credentials help break the linear server-side computation barrier intrinsic in the standard setting of biometric authentication. Experimental results validate the feasibility of realizing efficient anonymous biometric authentication.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123051024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault Attacks on Efficient Pairing Implementations
Pierre-Alain Fouque, Chen Qian
DOI: 10.1145/2897845.2897907
This paper studies the security of efficient pairing implementations, with both compressed and standard representations, against fault attacks. We show that these attacks solve the Fixed Argument Pairing Inversion problems (FAPI-1 and FAPI-2) and recover the first or second argument of the pairing inputs if we can inject double faults on the loop counters. Compared to the first attack of Page and Vercauteren on supersingular elliptic curves in characteristic three, these are the first attacks that address efficient pairing implementations. Most efficient Tate pairings are computed using a Miller loop followed by a final exponentiation. Many papers show how to invert only the Miller loop, and a recent paper by Lashermes et al. at CHES 2013 shows how to invert only the final exponentiation. For a long time, the final exponentiation was used as a countermeasure against inversion of the Miller loop. However, the CHES attack cannot be used to invert this step on efficient, concrete implementations. Indeed, the first two steps of the final exponentiation are computed efficiently using the Frobenius map. The drawback of the CHES 2013 attack is that it only works if these steps are implemented using very expensive inversions; in practice, these inversions are computed using a conjugate, since elements at the end of the first exponentiation are roots of unity. If this natural implementation is used, the CHES 2013 attack is thwarted, since it requires injecting a fault such that the faulted elements are not roots of unity. Consequently, it is highly probable that this attack will not work on concrete implementations. For the same reasons, it is not possible to invert the final exponentiation in the case of compressed pairings, and both methods (conjugate and compressed) were proposed by Lashermes et al. as countermeasures against their attack. Here, we demonstrate that we can solve the FAPI-1 and FAPI-2 problems for compressed and standard pairing implementations. We demonstrate the efficiency of our attacks through simulations with Sage on concrete implementations.
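The attacked structure is the Miller loop's double-and-add iteration over the bits of the loop bound. A full pairing is too long to reproduce here, so the sketch below uses plain square-and-multiply, which shares that loop structure, to show why faulting the loop counter is so powerful: two runs truncated after iterations j and j+1 let the attacker isolate a single iteration's contribution, which is the lever the double-fault attack uses. This is a simplified analogue, not a pairing implementation.

```python
def square_and_multiply(x, m, p, abort_after=None):
    """Left-to-right exponentiation; a Miller loop iterates the same way
    over the bits of its loop bound, so a counter fault truncates it."""
    bits = bin(m)[3:]                 # bits of m below the leading 1
    f = x
    for j, bit in enumerate(bits, 1):
        f = f * f % p                 # "doubling" step of each iteration
        if bit == "1":
            f = f * x % p             # conditional "addition" step
        if abort_after == j:          # models a fault on the loop counter
            return f
    return f

p, x, m = 2**127 - 1, 987654321, 0b1011011
full = square_and_multiply(x, m, p)
f_j  = square_and_multiply(x, m, p, abort_after=3)   # first faulted run
f_j1 = square_and_multiply(x, m, p, abort_after=4)   # second faulted run
# One iteration in isolation: f_{j+1} = f_j^2 * x^bit, so comparing the two
# faulted outputs reveals the processed bit (and, in a pairing, a simple
# equation in the secret input point).
ratio = f_j1 * pow(f_j, -2, p) % p
print("isolated multiplier is x or 1:", ratio == x or ratio == 1)
```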
{"title":"Fault Attacks on Efficient Pairing Implementations","authors":"Pierre-Alain Fouque, Chen Qian","doi":"10.1145/2897845.2897907","DOIUrl":"https://doi.org/10.1145/2897845.2897907","url":null,"abstract":"This paper studies the security of efficient pairing implementations with compressed and standard representations against fault attacks. We show that these attacks solve the Fixed Argument Pairing Inversion and recover the first or second argument of the pairing inputs if we can inject double-faults on the loop counters. Compared to the first attack of Page and Vercauteren on supersingular elliptic curves in characteristic three, these are the first attacks which address efficient pairing implementations. Most efficient Tate pairings are computed using a Miller loop followed by a Final Exponentiation. Many papers show how it is possible to invert only the Miller loop and a recent paper of Lashermes et al. at CHES 2013 shows how to invert only the final exponentiation. During a long time, the final exponentiation was used as a countermeasure against the inversion of the Miller loop. However, the CHES attack cannot be used to invert this step on efficient and concrete implementations. Indeed, the two first steps of the Final Exponentiation use the Frobenius map to compute them efficiently. The drawback of the CHES 2013 attack is that it only works if these steps are implemented using very expensive inversions, but in general, these inversions are computed by using a conjugate since elements at the end of the first exponentiation are unicity roots. If this natural implementation is used, the CHES 2013 attack is avoided since it requires to inject a fault so that the faulted elements are not unicity roots. Consequently, it is highly probable that for concrete implementations, this attack will not work. For the same reasons, it is not possible to invert the Final Exponentiation in case of compressed pairing and both methods (conjugate and compressed) were proposed by Lashermes et al. as countermeasures against their attack. Here, we demonstrate that we can solve the FAPI-1 and FAPI-2 problems for compressed and standard pairing implementations. We demonstrate the efficiency of our attacks by using simulations with Sage on concrete implementations.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127522810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ORIGEN: Automatic Extraction of Offset-Revealing Instructions for Cross-Version Memory Analysis
Qian Feng, Aravind Prakash, Minghua Wang, Curtis Carmony, Heng Yin
DOI: 10.1145/2897845.2897850
The semantic gap is a prominent problem in raw memory analysis, especially in Virtual Machine Introspection (VMI) and memory forensics. For COTS software, common memory forensics and VMI tools rely on so-called "data structure profiles": mappings between semantic variables and their relative offsets within structures in the binary. Constructing such profiles requires expert knowledge about the internal workings of a specific software version, and most of the time it requires considerable manual effort, which often turns out to be a cumbersome process. In this paper, we propose a notion named "cross-version memory analysis", in which our goal is to alleviate profile construction for new versions of a piece of software by transferring knowledge from a model already trained on an old version. To this end, we first identify Offset-Revealing Instructions (ORIs) in a given piece of software and then leverage code search techniques to label ORIs in an unknown version of the same software. With labeled ORIs, we can localize the profile for the new version. We provide a proof-of-concept implementation called ORIGEN. Its efficacy and efficiency have been empirically verified on a number of software packages. The experimental results show that by conducting the ORI search within Windows XP SP0 and Linux 3.5.0, we can successfully recover the data structure profiles for Windows XP SP2, Vista, and Windows 7, and for Linux 2.6.32, 3.8.0, and 3.13.0, respectively. A systematic evaluation on 40 versions of OpenSSH demonstrates that ORIGEN achieves a precision of more than 90%. As a case study, we integrate ORIGEN into a VMI tool to automatically extract the semantic information required for VMI. We develop two plugins for the Volatility memory forensics framework, one for OpenSSH session key extraction and the other for encrypted filesystem key extraction; both achieve cross-version analysis via ORIGEN.
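As a hypothetical miniature of ORI extraction, the snippet below disassembles two hand-assembled x86-64 instructions with Capstone and pulls out the structure-member offsets their memory operands reveal. ORIGEN additionally matches such instructions across binary versions via code search, which is not shown here.

```python
import re
from capstone import Cs, CS_ARCH_X86, CS_MODE_64

# Two accessors of the form "mov reg, [reg + offset]"; in a real binary such
# instructions reveal field offsets inside a kernel or application structure.
code = bytes.fromhex("488b4718"    # mov rax, qword ptr [rdi + 0x18]
                     "8b4340")     # mov eax, dword ptr [rbx + 0x40]

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(code, 0x401000):
    m = re.search(r"\[\w+ \+ (0x[0-9a-f]+)\]", insn.op_str)
    if m:
        print(f"{insn.address:#x}: {insn.mnemonic} {insn.op_str}"
              f" -> candidate field offset {m.group(1)}")
```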
{"title":"ORIGEN: Automatic Extraction of Offset-Revealing Instructions for Cross-Version Memory Analysis","authors":"Qian Feng, Aravind Prakash, Minghua Wang, Curtis Carmony, Heng Yin","doi":"10.1145/2897845.2897850","DOIUrl":"https://doi.org/10.1145/2897845.2897850","url":null,"abstract":"Semantic gap is a prominent problem in raw memory analysis, especially in Virtual Machine Introspection (VMI) and memory forensics. For COTS software, common memory forensics and VMI tools rely on the so-called \"data structure profiles\" -- a mapping between the semantic variables and their relative offsets within the structure in the binary. Construction of such profiles requires the expert knowledge about the internal working of a specified software version. At most time, it requires considerable manual efforts, which often turns out to be a cumbersome process. In this paper, we propose a notion named \"cross-version memory analysis\", wherein our goal is to alleviate the process of profile construction for new versions of a software by transferring the knowledge from the model that has already been trained on its old version. To this end, we first identify such Offset Revealing Instructions (ORI) in a given software and then leverage the code search techniques to label ORIs in an unknown version of the same software. With labeled ORIs, we can localize the profile for the new version. We provide a proof-of-concept implementation called ORIGEN. The efficacy and efficiency of ORIGEN have been empirically verified by a number of softwares. The experimental results show that by conducting the ORI search within Windows XP SP0 and Linux 3.5.0, we can successfully recover the data structure profiles for Windows XP SP2, Vista, Win 7, and Linux 2.6.32, 3.8.0, 3.13.0, respectively. The systematical evaluation on 40 versions of OpenSSH demonstrates ORIGEN can achieve a precision of more than 90%. As a case study, we integrate ORIGEN into a VMI tool to automatically extract semantic information required for VMI. We develop two plugins to the Volatility memory forensic framework, one for OpenSSH session key extraction, the other for encrypted filesystem key extraction. Both of them can achieve the cross-version analysis by ORIGEN.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128799855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R-Droid: Leveraging Android App Analysis with Static Slice Optimization
M. Backes, Sven Bugiel, Erik Derr, S. Gerling, Christian Hammer
DOI: 10.1145/2897845.2897927
Today's feature-rich smartphone apps intensively rely on access to highly sensitive (personal) data. This puts the user's privacy at risk of being violated by overly curious apps or libraries (such as advertisement libraries). Central app markets conceptually represent a first line of defense against such invasions of the user's privacy, but we still lack full support for automatically analyzing apps' internal data flows and for supporting analysts in statically assessing apps' behavior. In this paper we present a novel slice-optimization approach that leverages static analysis of Android applications. Building on top of precise application lifecycle models, we employ a slicing-based analysis to generate data-dependent statements for arbitrary points of interest in an application. As a result of our optimization, the produced slices are, on average, 49% smaller than standard slices, facilitating code understanding and result validation by security analysts. Moreover, by re-targeting strings, our approach enables automatic assessments for a larger number of use cases than prior work. We consolidate our improvements to statically analyzing Android apps into a tool called R-Droid and conducted a large-scale data-leak analysis on a set of 22,700 Android apps from Google Play. R-Droid identified a significantly larger set of potential privacy-violating information flows than previous work, including 2,157 sensitive flows of password-flagged UI widgets in 256 distinct apps.
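Slicing itself is a standard analysis; as a hypothetical miniature, the sketch below computes a backward slice over a toy straight-line program given as statements with def/use sets, keeping only the statements the sink's value depends on. This is the kind of data-dependent statement set R-Droid generates; its lifecycle models and slice optimizations are not modeled here.

```python
# Each statement: (id, defs, uses). A backward slice from a sink keeps only
# statements whose definitions (transitively) feed the sink's used variables.
stmts = [
    (1, {"pwd"},  set()),        # pwd = ui.getPassword()
    (2, {"salt"}, set()),        # salt = randomSalt()
    (3, {"msg"},  {"pwd"}),      # msg = encode(pwd)
    (4, {"log"},  {"salt"}),     # log = format(salt)   (irrelevant to sink)
    (5, set(),    {"msg"}),      # network.send(msg)    <- sink of interest
]

def backward_slice(stmts, sink_id):
    wanted, keep = set(), set()
    for sid, defs, uses in reversed(stmts):
        if sid == sink_id or defs & wanted:
            keep.add(sid)
            wanted = (wanted - defs) | uses   # kill satisfied defs, track uses
    return sorted(keep)

print("slice for sink 5:", backward_slice(stmts, 5))   # -> [1, 3, 5]
```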
{"title":"R-Droid: Leveraging Android App Analysis with Static Slice Optimization","authors":"M. Backes, Sven Bugiel, Erik Derr, S. Gerling, Christian Hammer","doi":"10.1145/2897845.2897927","DOIUrl":"https://doi.org/10.1145/2897845.2897927","url":null,"abstract":"Today's feature-rich smartphone apps intensively rely on access to highly sensitive (personal) data. This puts the user's privacy at risk of being violated by overly curious apps or libraries (like advertisements). Central app markets conceptually represent a first line of defense against such invasions of the user's privacy, but unfortunately we are still lacking full support for automatic analysis of apps' internal data flows and supporting analysts in statically assessing apps' behavior. In this paper we present a novel slice-optimization approach to leverage static analysis of Android applications. Building on top of precise application lifecycle models, we employ a slicing-based analysis to generate data-dependent statements for arbitrary points of interest in an application. As a result of our optimization, the produced slices are, on average, 49% smaller than standard slices, thus facilitating code understanding and result validation by security analysts. Moreover, by re-targeting strings, our approach enables automatic assessments for a larger number of use-cases than prior work. We consolidate our improvements on statically analyzing Android apps into a tool called R-Droid and conducted a large-scale data-leak analysis on a set of 22,700 Android apps from Google Play. R-Droid managed to identify a significantly larger set of potential privacy-violating information flows than previous work, including 2,157 sensitive flows of password-flagged UI widgets in 256 distinct apps.","PeriodicalId":166633,"journal":{"name":"Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116737849","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}