Conference on Computer and Communications Security: proceedings of the ... conference on computer and communications security. ACM Conference on Computer and Communications Security: Latest Publications
Poster: a geometric approach for multicast authentication in adversarial channels
Seyed Ali Ahmadzadeh, G. Agnew
DOI: 10.1145/2046707.2093479 (ACM CCS 2011, pp. 729-732)

In this work, we investigate the application of geometric representations of information packets' hash vectors to multicast authentication protocols. To this end, we propose a new authentication approach based on the geometric properties of hash vectors in an n-dimensional vector space. The approach enables the receiver to authenticate the source packets and to remove malicious packets that an adversary may have injected into the channel. A salient feature of the proposed scheme is that its bandwidth overhead is independent of the number of injected packets. Moreover, performance analysis verifies that the scheme significantly reduces bandwidth overhead compared to well-known multicast authentication protocols from the literature (e.g., PRABS).
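The abstract does not spell out the construction, so the following is only an illustrative toy in the same spirit, not the paper's scheme; the field size P, the dimension N, and all helper names are invented here. The idea sketched: hash each packet to a point in GF(P)^N, have the sender publish (and, in a real protocol, digitally sign) the coefficients of one hyperplane passing through all legitimate hash points, and have the receiver discard any packet whose point falls off that hyperplane. The published data is a single hyperplane, so the overhead does not grow with the number of injected packets:

```python
import hashlib

P = 2_147_483_647          # a Mersenne prime; the toy geometry lives in GF(P)
N = 8                      # dimension of the hash vectors (arbitrary choice)

def hash_vector(packet: bytes) -> list:
    """Map a packet to a point in GF(P)^N via SHA-256 (4 bytes per coordinate)."""
    d = hashlib.sha256(packet).digest()
    return [int.from_bytes(d[4 * i:4 * i + 4], "big") % P for i in range(N)]

def _nullspace_vector(rows):
    """Return a nonzero x with rows @ x == 0 over GF(P), via Gaussian elimination."""
    m, n = len(rows), len(rows[0])
    a = [r[:] for r in rows]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if a[i][c]), None)
        if piv is None:
            continue
        a[r], a[piv] = a[piv], a[r]
        inv = pow(a[r][c], P - 2, P)                 # Fermat inverse (P prime)
        a[r] = [(x * inv) % P for x in a[r]]
        for i in range(m):
            if i != r and a[i][c]:
                f = a[i][c]
                a[i] = [(x - f * y) % P for x, y in zip(a[i], a[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    free = next(c for c in range(n) if c not in pivots)  # exists since n > m
    x = [0] * n
    x[free] = 1
    for row, c in zip(a, pivots):
        x[c] = (-row[free]) % P
    return x

def sign_batch(packets):
    """'Signature': coefficients (a, b) of a hyperplane a.v = b through all hash points."""
    rows = [hash_vector(p) + [P - 1] for p in packets]   # encodes a.v - b = 0
    sol = _nullspace_vector(rows)
    return sol[:N], sol[N]

def verify(packet, sig):
    """Accept a packet iff its hash point lies on the signed hyperplane."""
    a, b = sig
    v = hash_vector(packet)
    return sum(ai * vi for ai, vi in zip(a, v)) % P == b
```

Here `sign_batch` stands in for whatever digital signature would protect the hyperplane coefficients in a real protocol; the toy only shows the geometric filtering step, under which an adversary's injected packet lands on the hyperplane only with probability about 1/P.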
Poster: SMURFEN: a rule sharing collaborative intrusion detection network
Carol J. Fung, Quanyan Zhu, R. Boutaba, T. Başar
DOI: 10.1145/2046707.2093487 (ACM CCS 2011, pp. 761-764)

Intrusion Detection Systems (IDSs) monitor network traffic and computer activities in order to alert users to suspicious intrusions. Collaboration among IDSs allows users to benefit from the collective knowledge and information of their collaborators and to achieve more accurate intrusion detection. However, most existing collaborative intrusion detection networks rely on the exchange of intrusion data, which raises privacy concerns. To overcome this problem, we propose SMURFEN, a knowledge-based intrusion detection network that provides a platform for IDS users to effectively share their customized detection knowledge within an IDS community. An automatic knowledge propagation mechanism is proposed, based on a decentralized two-level optimization formulation that leads to a Nash equilibrium solution proved to be scalable, incentive compatible, fair, efficient, and robust.
Poster: towards formal verification of DIFC policies
Zhi Yang, Lihua Yin, Miyi Duan, Shuyuan Jin
DOI: 10.1145/2046707.2093515 (ACM CCS 2011, pp. 873-876)

Decentralized information flow control (DIFC) is an important recent innovation whose flexible mechanisms improve the usability of traditional information flow models. However, the flexibility of DIFC models also makes specifying and managing DIFC policies challenging. Formal policy verification techniques can improve the current state of the art in policy specification and management. We show that the policy verification problems for the main DIFC systems are NP-hard in general, and that several subcases remain NP-complete. We also propose a model-checking approach to solving these problems; experiments show that the approach is effective.
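For readers unfamiliar with DIFC, the policies being verified are built from local label checks of roughly the following shape (a minimal sketch in the style of DIFC systems such as Flume; the function name and tag sets are invented for illustration):

```python
# Toy DIFC-style label check. Each process carries a set of secrecy tags
# (data it has seen) and integrity tags (trust it claims). A flow from
# src to dst is allowed iff dst can hold all of src's secrets and src
# meets all of dst's integrity requirements.

def flow_allowed(src_secrecy, src_integrity, dst_secrecy, dst_integrity):
    return src_secrecy <= dst_secrecy and dst_integrity <= src_integrity
```

Each such check is trivial; the hardness the authors prove arises because verification asks global questions about all flows a policy can ever admit, including flows enabled by processes dynamically changing their own labels.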
Protecting consumer privacy from electric load monitoring
Stephen E. McLaughlin, P. McDaniel, W. Aiello
DOI: 10.1145/2046707.2046720 (ACM CCS 2011, pp. 87-98)

The smart grid raises concerns about the loss of consumer privacy: recently deployed smart meters retain and distribute highly accurate profiles of home energy use. These profiles can be mined by Non-Intrusive Load Monitors (NILMs) to expose much of the human activity within the served site. This paper introduces a new class of algorithms and systems, called Non-Intrusive Load Leveling (NILL), to combat potential invasions of privacy. NILL uses an in-residence battery to mask variance in the load presented to the grid, thereby eliminating exposure of the appliance-driven information used to compromise consumer privacy. We use real residential energy use profiles to drive four simulated deployments of NILL. The simulations show that NILL exposes only 1.1 to 5.9 useful energy events per day, hidden amongst hundreds or thousands of similar battery-suppressed events. Thus, the energy profiles exhibited under NILL are largely useless to current NILM algorithms. Surprisingly, such privacy gains can be achieved with battery systems whose storage capacity is far lower than the residence's aggregate average load. We conclude by discussing how the costs of NILL can be offset by energy savings under tiered energy schedules.
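The core load-leveling idea can be sketched as follows. This is a simplified model, not the paper's algorithm: real NILL adapts its target load over time, while this toy keeps a constant target, ignores battery efficiency losses, and uses invented names throughout:

```python
def nill_level(demand, capacity, target):
    """Toy NILL-style leveler: the battery absorbs deviations of household
    demand from a constant target load, so the grid sees ~target instead of
    the appliance-driven profile. Returns the load the utility observes."""
    state = capacity / 2.0                     # battery starts half full
    grid = []
    for d in demand:
        want = target - d                      # desired charge (+) / discharge (-)
        new_state = min(max(state + want, 0.0), capacity)
        charge = new_state - state             # what the battery can actually do
        grid.append(d + charge)                # observed load; == target if unclamped
        state = new_state
    return grid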
Process out-grafting: an efficient "out-of-VM" approach for fine-grained process execution monitoring
D. Srinivasan, Zhi Wang, Xuxian Jiang, Dongyan Xu
DOI: 10.1145/2046707.2046751 (ACM CCS 2011, pp. 363-374)

Recent rapid malware growth has exposed the limitations of traditional in-host malware-defense systems and motivated the development of secure, virtualization-based out-of-VM solutions. By running vulnerable systems as virtual machines (VMs) and moving security software from inside the VMs to the outside, out-of-VM solutions securely isolate the anti-malware software from the vulnerable system. However, the semantic gap between the VM and the external monitor also creates a compatibility problem: existing defense software is not supported. In this paper, we present process out-grafting, an architectural approach that addresses both the isolation and the compatibility challenges of out-of-VM approaches to fine-grained, process-level execution monitoring. Specifically, by relocating a suspect process from inside a VM to run side by side with the out-of-VM security tool, our technique effectively removes the semantic gap and supports existing user-mode process monitoring tools without any modification. Moreover, by forwarding system calls back to the VM, we can smoothly continue the execution of the out-grafted process without weakening the isolation of the monitoring tool. We have developed a KVM-based prototype and used it to natively support a number of existing tools without modification. Evaluation results, including measurements with benchmark programs, show that it is effective and practical, with a small performance overhead.
Poster: DIEGO: a fine-grained access control for web browsers
Ashar Javed
DOI: 10.1145/2046707.2093494 (ACM CCS 2011, pp. 789-792)

Modern web applications combine content from several sources (with varying security characteristics) and incorporate a significant portion of user-supplied content to enrich the browsing experience. However, the de facto web protection model, the same-origin policy (SOP), has not adequately evolved to manage the security consequences of this additional complexity. As a result, web applications are subject to a broad sphere of attacks (cross-site scripting, cross-site request forgery, and others). The fundamental problem is a failure of access control. To solve this, we present DIEGO, a new fine-grained access control model for web browsers. Our overall design approach is to combine operating-system mandatory access control (MAC) principles with a tag-pairing isolation technique to provide stealthy protection. For backwards compatibility, DIEGO defaults to the same-origin policy for web applications.
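The same-origin policy that DIEGO falls back to compares the (scheme, host, port) triple of two URLs. A minimal sketch of that baseline check (the helper names are ours, not DIEGO's):

```python
from urllib.parse import urlsplit

# Default ports per scheme, so http://example.com and http://example.com:80
# are recognized as the same origin.
DEFAULT_PORTS = {"http": 80, "https": 443}

def origin(url):
    """An origin is the (scheme, host, port) triple of a URL."""
    u = urlsplit(url)
    port = u.port if u.port is not None else DEFAULT_PORTS.get(u.scheme)
    return (u.scheme, u.hostname, port)

def same_origin(url_a, url_b):
    return origin(url_a) == origin(url_b)
```

Note that under this check, subdomains and scheme changes are distinct origins; DIEGO's contribution is the finer-grained tag-based control layered on top of this coarse boundary.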
Predictive mitigation of timing channels in interactive systems
Danfeng Zhang, Aslan Askarov, A. Myers
DOI: 10.1145/2046707.2046772 (ACM CCS 2011, pp. 563-574)

Timing channels remain a difficult and important problem for information security. Recent work introduced predictive mitigation, a new way of mitigating leakage through timing channels; the mechanism works by predicting timing from past behavior and then enforcing the predictions. This paper generalizes predictive mitigation to a larger, important class of systems: those that receive input requests from multiple clients and deliver responses. The new insight is that timing predictions may be a function of any public information, rather than simply a function of output events. Based on this insight, a more general mechanism and theory of predictive mitigation become possible. The result is that bounds on timing leakage can be tightened, achieving asymptotically logarithmic leakage under reasonable assumptions. Applying the generalized mechanism to web applications shows that it is effective in practice.
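A loose sketch of the predictive flavor, not the paper's exact scheme: here a prediction is simply a release deadline, and the predicted interval doubles whenever the real computation misses it. Because the observable release times change only on mispredictions, and intervals grow geometrically, the number of distinct observable schedules over a run of length T grows only logarithmically, which is the intuition behind the asymptotic bound:

```python
def mitigate(ready_times, initial=1.0):
    """Toy predictive mitigation: a response that is ready early is delayed
    until its predicted release time; a response that misses its prediction
    causes the predicted interval to double (a misprediction epoch)."""
    interval = initial
    release, now = [], 0.0
    for ready in ready_times:
        predicted = now + interval
        while predicted < ready:        # misprediction: back off, double
            interval *= 2.0
            predicted = now + interval
        release.append(predicted)
        now = predicted
    return release
```

In this toy the interval never shrinks; the paper's mechanism is richer (predictions can depend on any public information, per client) but shares the idea that observers see enforced predictions, not raw timing.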
Cloak and dagger: dynamics of web search cloaking
David Y. Wang, S. Savage, G. Voelker
DOI: 10.1145/2046707.2046763 (ACM CCS 2011, pp. 477-490)

Cloaking is a common "bait-and-switch" technique used to hide the true nature of a web site by delivering blatantly different semantic content to different user segments. It is often used in search engine optimization (SEO) to obtain user traffic illegitimately for scams. In this paper, we measure and characterize the prevalence of cloaking on different search engines, how this behavior changes for targeted versus untargeted advertising, and, ultimately, the response of search engine providers to site cloaking. Using a custom crawler called Dagger, we track both popular search terms (e.g., as identified by Google, Alexa, and Twitter) and targeted keywords (focused on pharmaceutical products) for over five months, identifying when distinct results were provided to crawlers and browsers. We further track the lifetime of cloaked search results, as well as of the sites they point to, demonstrating that cloakers can expect to maintain their pages in search results for several days on popular search engines, and to maintain the pages themselves for longer still.
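Dagger's core measurement, comparing what a crawler is served against what a browser is served, can be approximated with a simple content-similarity test. This is a sketch with invented names and an arbitrary threshold; the real system uses more robust page features:

```python
import re

def shingles(text, k=3):
    """k-word shingles of a page's visible text (toy normalization)."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def looks_cloaked(crawler_text, browser_text, threshold=0.3):
    """Flag a URL when the page served to a crawler user agent and the page
    served to a browser user agent share too few shingles (Jaccard below
    threshold), suggesting blatantly different semantic content."""
    a, b = shingles(crawler_text), shingles(browser_text)
    if not a or not b:
        return True
    jaccard = len(a & b) / len(a | b)
    return jaccard < threshold
```

In practice the two texts would come from fetching the same URL twice with different User-Agent and Referer headers, since cloakers key on exactly those signals.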
The power of procrastination: detection and mitigation of execution-stalling malicious code
C. Kolbitsch, E. Kirda, Christopher Krügel
DOI: 10.1145/2046707.2046740 (ACM CCS 2011, pp. 285-296)

Malware remains one of the most important security problems on the Internet today. Whenever an anti-malware solution becomes popular, malware authors typically react promptly and modify their programs to evade defense mechanisms. For example, malware authors have recently begun to create malicious code that can evade dynamic analysis.

One recent form of evasion against dynamic analysis systems is stalling code. Stalling code is typically executed before any malicious behavior; the attacker's aim is to delay the execution of the malicious activity long enough that an automated dynamic analysis system fails to extract the interesting malicious behavior. This paper presents the first approach to detect and mitigate malicious stalling code and to ensure forward progress within the amount of time allocated for the analysis of a sample. Experimental results show that our system, called HASTEN, works well in practice and is able to detect additional malicious behavior in real-world malware samples.
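A minimal sketch of the detection half (invented names and thresholds; HASTEN itself observes execution at instruction and system-call granularity and also actively mitigates the stalling): flag a region where the sample keeps consuming analysis time while making almost no observable progress:

```python
def find_stalling(trace, window=5, min_progress=2):
    """Toy stalling detector: `trace` is a per-time-slice count of observable
    progress events (e.g., system calls). Return the index of the first
    window in which the sample makes almost no forward progress, or None."""
    for i in range(len(trace) - window + 1):
        if sum(trace[i:i + window]) < min_progress:
            return i                   # stalling region starts here
    return None
```

Once such a region is found, a mitigation step in the paper's spirit would deprioritize or skip the identified loop so the analysis reaches the behavior hidden behind it.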
Televisions, video privacy, and powerline electromagnetic interference
Miro Enev, Sidhant Gupta, Tadayoshi Kohno, Shwetak N. Patel
DOI: 10.1145/2046707.2046770 (ACM CCS 2011, pp. 537-550)

We conduct an extensive study of information leakage over the powerline infrastructure from eight televisions (TVs) spanning multiple makes, models, and underlying technologies. In addition to being of scientific interest, our findings contribute to the broader debate over whether measurements of residential powerlines reveal significant information about the activities within a home. We find that the power supplies of modern TVs produce discernible electromagnetic interference (EMI) signatures that are indicative of the video content being displayed. We measure the stability of these signatures over time and across multiple instances of the same TV model, as well as the robustness of these signatures in the presence of other noisy electronic devices connected to the same powerline.
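A toy version of matching an observed powerline trace against known content signatures (invented names; the paper's signal-processing pipeline is considerably more sophisticated): slide each signature over the trace and score windows by normalized cross-correlation:

```python
import math

def _norm(xs):
    """Mean-center and unit-scale a window so correlation is amplitude-free."""
    mean = sum(xs) / len(xs)
    centered = [x - mean for x in xs]
    scale = math.sqrt(sum(c * c for c in centered)) or 1.0
    return [c / scale for c in centered]

def best_match(trace, signatures):
    """Return the label whose EMI signature has the highest normalized
    cross-correlation peak anywhere in the observed trace."""
    best_label, best_score = None, -2.0
    for label, sig in signatures.items():
        s = _norm(sig)
        for i in range(len(trace) - len(sig) + 1):
            w = _norm(trace[i:i + len(sig)])
            score = sum(a * b for a, b in zip(s, w))
            if score > best_score:
                best_label, best_score = label, score
    return best_label
```

The stability and robustness questions the paper studies correspond to how well such peaks survive across time, across device instances, and in the presence of other noisy loads on the same line.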