Localizing Firewall Security Policies
P. Adão, R. Focardi, J. Guttman, F. Luccio
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 194-209. doi:10.1109/CSF.2016.21

In complex networks, filters may be applied at different nodes to control how packets flow. In this paper, we study how to locate filtering functionality within a network. We show how to enforce a set of security goals while allowing maximal service subject to the security constraints. To implement our results, we present a tool that, given a network specification and a set of control rules, automatically localizes the filters and generates configurations for all the firewalls in the network. These configurations are implemented using an extension of Mignis, an open-source tool that generates firewalls from declarative, semantically explicit configurations. Our contributions include a way to specify security goals for how packets traverse the network, an algorithm that distributes filtering functionality to different nodes in the network so as to enforce a given set of security goals, and a proof that the results are compatible with a Mignis-based semantics for network behavior.
{"title":"Localizing Firewall Security Policies","authors":"P. Adão, R. Focardi, J. Guttman, F. Luccio","doi":"10.1109/CSF.2016.21","DOIUrl":"https://doi.org/10.1109/CSF.2016.21","url":null,"abstract":"In complex networks, filters may be applied at different nodes to control how packets flow. In this paper, we study how to locate filtering functionality within a network. We show how to enforce a set of security goals while allowing maximal service subject to the security constraints. To implement our results we present a tool that given a network specification and a set of control rules automatically localizes the filters and generates configurations for all the firewalls in the network. These configurations are implemented using an extension of Mignis - an open source tool to generate firewalls from declarative, semantically explicit configurations. Our contributions include a way to specify security goals for how packets traverse the network, an algorithm to distribute filtering functionality to different nodes in the network to enforce a given set of security goals, and a proof that the results are compatible with a Mignis-based semantics for network behavior.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"77 1","pages":"194-209"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84055151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hybrid Monitoring of Attacker Knowledge
Frédéric Besson, Nataliia Bielova, T. Jensen
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 225-238. doi:10.1109/CSF.2016.23

Enforcement of noninterference requires proving that an attacker's knowledge about the initial state remains the same after observing a program's public output. We propose a hybrid monitoring mechanism which dynamically evaluates the knowledge that is contained in program variables. To get a precise estimate of the knowledge, the monitor statically analyses non-executed branches. We show that our knowledge-based monitor can be combined with existing dynamic monitors for noninterference. A distinguishing feature of such a combination is that the combined monitor is provably more permissive than each mechanism taken separately. We demonstrate this by proposing a knowledge-enhanced version of a no-sensitive-upgrade (NSU) monitor. The monitor and its static analysis have been formalized and proved correct within the Coq proof assistant.
{"title":"Hybrid Monitoring of Attacker Knowledge","authors":"Frédéric Besson, Nataliia Bielova, T. Jensen","doi":"10.1109/CSF.2016.23","DOIUrl":"https://doi.org/10.1109/CSF.2016.23","url":null,"abstract":"Enforcement of noninterference requires proving that an attacker's knowledge about the initial state remains the same after observing a program's public output. We propose a hybrid monitoring mechanism which dynamically evaluates the knowledge that is contained in program variables. To get a precise estimate of the knowledge, the monitor statically analyses non-executed branches. We show that our knowledge-based monitor can be combined with existing dynamic monitors for non-interference. A distinguishing feature of such a combination is that the combined monitor is provably more permissive than each mechanism taken separately. We demonstrate this by proposing a knowledge-enhanced version of a no-sensitive-upgrade (NSU) monitor. The monitor and its static analysis have been formalized and proved correct within the Coq proof assistant.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"43 1","pages":"225-238"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77400935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference
Toby C. Murray, Robert Sison, Edward Pierzchalski, C. Rizkallah
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 417-431. doi:10.1109/CSF.2016.36

Value-dependent noninterference allows the classification of program variables to depend on the contents of other variables, and is therefore able to express a range of data-dependent security policies. However, so far its static enforcement mechanisms for software have been limited either to progress- and termination-insensitive noninterference for sequential languages, or to concurrent message-passing programs without shared memory. Additionally, there exists no methodology for preserving value-dependent noninterference for shared-memory programs under compositional refinement. This paper presents a flow-sensitive dependent type system for enforcing timing-sensitive value-dependent noninterference for shared-memory concurrent programs, comprising a collection of sequential components, as well as a compositional refinement theory for preserving this property under componentwise refinement. Our results are mechanised in Isabelle/HOL.
{"title":"Compositional Verification and Refinement of Concurrent Value-Dependent Noninterference","authors":"Toby C. Murray, Robert Sison, Edward Pierzchalski, C. Rizkallah","doi":"10.1109/CSF.2016.36","DOIUrl":"https://doi.org/10.1109/CSF.2016.36","url":null,"abstract":"Value-dependent noninterference allows the classification of program variables to depend on the contents of other variables, and therefore is able to express a range of data-dependent security policies. However, so far its static enforcement mechanisms for software have been limited either to progress-and termination-insensitive noninterference for sequential languages, or to concurrent message-passing programs without shared memory. Additionally, there exists no methodology for preserving value-dependent noninterference for shared memory programs under compositional refinement. This paper presents a flow-sensitive dependent type system for enforcing timing-sensitive value-dependent noninterference for shared memory concurrent programs, comprising a collection of sequential components, as well as a compositional refinement theory for preserving this property under componentwise refinement. Our results are mechanised in Isabelle/HOL.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"40 1","pages":"417-431"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78105212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Access Control, Capabilities, Their Equivalence, and Confused Deputy Attacks
Vineet Rajani, D. Garg, Tamara Rezk
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 150-163. doi:10.1109/CSF.2016.18

Motivated by the problem of formally understanding the difference between practical access control and capability systems, we distill the essence of both in a language-based setting. We first prove that access control systems and (object) capabilities are fundamentally different. We further study capabilities as an enforcement mechanism against confused deputy attacks (CDAs), since CDAs may have been the primary motivation for the invention of capabilities. To do this, we develop the first formal characterization of CDA-freedom in a language-based setting and describe its relation to standard information flow integrity. We show that, perhaps surprisingly, capabilities cannot prevent all CDAs. Next, we stipulate restrictions on programs under which capabilities ensure CDA-freedom and prove that the restrictions are sufficient. To relax those restrictions, we examine provenance semantics as sound CDA-freedom enforcement mechanisms.
{"title":"On Access Control, Capabilities, Their Equivalence, and Confused Deputy Attacks","authors":"Vineet Rajani, D. Garg, Tamara Rezk","doi":"10.1109/CSF.2016.18","DOIUrl":"https://doi.org/10.1109/CSF.2016.18","url":null,"abstract":"Motivated by the problem of understanding the difference between practical access control and capability systems formally, we distill the essence of both in a language-based setting. We first prove that access control systems and (object) capabilities are fundamentally different. We further study capabilities as an enforcement mechanism for confused deputy attacks (CDAs), since CDAs may have been the primary motivation for the invention of capabilities. To do this, we develop the first formal characterization of CDA-freedom in a language-based setting and describe its relation to standard information flow integrity. We show that, perhaps suprisingly, capabilities cannot prevent all CDAs. Next, we stipulate restrictions on programs under which capabilities ensure CDA-freedom and prove that the restrictions are sufficient. To relax those restrictions, we examine provenance semantics as sound CDA-freedom enforcement mechanisms.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"10 1","pages":"150-163"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88304754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Post-compromise Security
Katriel Cohn-Gordon, C. Cremers, L. Garratt
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 164-178. doi:10.1109/CSF.2016.19

In this work we study communication with a party whose secrets have already been compromised. At first sight, it may seem impossible to provide any type of security in this scenario. However, under some conditions, practically relevant guarantees can still be achieved. We call such guarantees "post-compromise security". We provide the first informal and formal definitions for post-compromise security, and show that it can be achieved in several scenarios. At a technical level, we instantiate our informal definitions in the setting of authenticated key exchange (AKE) protocols, and develop two new strong security models for two different threat models. We show that both of these security models can be satisfied by proposing two concrete protocol constructions and proving that they are secure in the models. Our work leads to crucial insights into how post-compromise security can (and cannot) be achieved, paving the way for applications in other domains.
{"title":"On Post-compromise Security","authors":"Katriel Cohn-Gordon, C. Cremers, L. Garratt","doi":"10.1109/CSF.2016.19","DOIUrl":"https://doi.org/10.1109/CSF.2016.19","url":null,"abstract":"In this work we study communication with a party whose secrets have already been compromised. At first sight, it may seem impossible to provide any type of security in this scenario. However, under some conditions, practically relevant guarantees can still be achieved. We call such guarantees \"post-compromise security\". We provide the first informal and formal definitions for post-compromise security, and show that it can be achieved in several scenarios. At a technical level, we instantiate our informal definitions in the setting of authenticated key exchange (AKE) protocols, and develop two new strong security models for two different threat models. We show that both of these security models can be satisfied, by proposing two concrete protocol constructions and proving they are secure in the models. Our work leads to crucial insights on how post-compromise security can (and cannot) be achieved, paving the way for applications in other domains.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"61 1","pages":"164-178"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83965504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of Key Wrapping APIs: Generic Policies, Computational Security
Guillaume Scerri, Ryan Stanley-Oakes
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 281-295. doi:10.1109/CSF.2016.27

We present an analysis of key wrapping APIs with generic policies. We prove that certain minimal conditions on policies are sufficient for keys to be indistinguishable from random in any execution of an API. Our result captures a large class of API policies, including both the hierarchies on keys that are common in the scientific literature and the non-linear dependencies on keys used in PKCS#11. Indeed, we use our result to propose a secure refinement of PKCS#11, assuming that the attributes of keys are transmitted as authenticated associated data when wrapping and that there is an enforced separation between keys used for wrapping and keys used for other cryptographic purposes. We use the Computationally Complete Symbolic Attacker developed by Bana and Comon. This model enables us to obtain computational guarantees using a simple proof with a high degree of modularity.
{"title":"Analysis of Key Wrapping APIs: Generic Policies, Computational Security","authors":"Guillaume Scerri, Ryan Stanley-Oakes","doi":"10.1109/CSF.2016.27","DOIUrl":"https://doi.org/10.1109/CSF.2016.27","url":null,"abstract":"We present an analysis of key wrapping APIs with generic policies. We prove that certain minimal conditions on policies are sufficient for keys to be indistinguishable from random in any execution of an API. Our result captures a large class of API policies, including both the hierarchies on keys that are common in the scientific literature and the non-linear dependencies on keys used in PKCS#11. Indeed, we use our result to propose a secure refinement of PKCS#11, assuming that the attributes of keys are transmitted as authenticated associated data when wrapping and that there is an enforced separation between keys used for wrapping and keys used for other cryptographic purposes. We use the Computationally Complete Symbolic Attacker developed by Bana and Comon. This model enables us to obtain computational guarantees using a simple proof with a high degree of modularity.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"24 1","pages":"281-295"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90896994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Micro-policies for Web Session Security
Stefano Calzavara, R. Focardi, Niklas Grimm, Matteo Maffei
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 179-193. doi:10.1109/CSF.2016.20

Micro-policies, originally proposed to implement hardware-level security monitors, constitute a flexible and general enforcement technique, based on assigning security tags to system components and taking security actions based on dynamic checks over these tags. In this paper, we present the first application of micro-policies to web security, by proposing a core browser model supporting them and studying its effectiveness at securing web sessions. In our view, web session security requirements are expressed in terms of a simple, declarative information flow policy, which is then automatically translated into a micro-policy enforcing it. This leads to a browser-side enforcement mechanism which is elegant, sound and flexible, while being accessible to web developers. We show how a large class of attacks against web sessions can be uniformly and effectively prevented by the adoption of this approach. We also develop a proof-of-concept implementation of a significant core of our proposal as a Google Chrome extension, Michrome: our experiments show that Michrome can be easily configured to enforce strong security policies without breaking the functionality of websites.
{"title":"Micro-policies for Web Session Security","authors":"Stefano Calzavara, R. Focardi, Niklas Grimm, Matteo Maffei","doi":"10.1109/CSF.2016.20","DOIUrl":"https://doi.org/10.1109/CSF.2016.20","url":null,"abstract":"Micro-policies, originally proposed to implement hardware-level security monitors, constitute a flexible and general enforcement technique, based on assigning security tags to system components and taking security actions based on dynamic checks over these tags. In this paper, we present the first application of micro-policies to web security, by proposing a core browser model supporting them and studying its effectiveness at securing web sessions. In our view, web session security requirements are expressed in terms of a simple, declarative information flow policy, which is then automatically translated into a micro-policy enforcing it. This leads to a browser-side enforcement mechanism which is elegant, sound and flexible, while being accessible to web developers. We show how a large class of attacks against web sessions can be uniformly and effectively prevented by the adoption of this approach. We also develop a proof-of-concept implementation of a significant core of our proposal as a Google Chrome extension, Michrome: our experiments show that Michrome can be easily configured to enforce strong security policies without breaking the functionality of websites.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"25 1","pages":"179-193"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86751324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Calculus for Flow-Limited Authorization
Owen Arden, A. Myers
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 135-149. doi:10.1109/CSF.2016.17

Real-world applications routinely make authorization decisions based on dynamic computation. Reasoning about dynamically computed authority is challenging. Integrity of the system might be compromised if attackers can improperly influence the authorizing computation. Confidentiality can also be compromised by authorization, since authorization decisions are often based on sensitive data such as membership lists and passwords. Previous formal models for authorization do not fully address the security implications of permitting trust relationships to change, which limits their ability to reason about authority that derives from dynamic computation. Our goal is a way to construct dynamic authorization mechanisms that do not violate confidentiality or integrity. We introduce the Flow-Limited Authorization Calculus (FLAC), which is both a simple, expressive model for reasoning about dynamic authorization and also an information flow control language for securely implementing various authorization mechanisms. FLAC combines the insights of two previous models: it extends the Dependency Core Calculus with features made possible by the Flow-Limited Authorization Model. FLAC provides strong end-to-end information security guarantees even for programs that incorporate and implement rich dynamic authorization mechanisms. These guarantees include noninterference and robust declassification, which prevent attackers from influencing information disclosures in unauthorized ways. We prove these security properties formally for all FLAC programs and explore the expressiveness of FLAC with several examples.
{"title":"A Calculus for Flow-Limited Authorization","authors":"Owen Arden, A. Myers","doi":"10.1109/CSF.2016.17","DOIUrl":"https://doi.org/10.1109/CSF.2016.17","url":null,"abstract":"Real-world applications routinely make authorization decisions based on dynamic computation. Reasoning about dynamically computed authority is challenging. Integrity of the system might be compromised if attackers can improperly influence the authorizing computation. Confidentiality can also be compromised by authorization, since authorization decisions are often based on sensitive data such as membership lists and passwords. Previous formal models for authorization do not fully address the security implications of permitting trust relationships to change, which limits their ability to reason about authority that derives from dynamic computation. Our goal is a way to construct dynamic authorization mechanisms that do not violate confidentiality or integrity. We introduce the Flow-Limited Authorization Calculus (FLAC), which is both a simple, expressive model for reasoning about dynamic authorization and also an information flow control language for securely implementing various authorization mechanisms. FLAC combines the insights of two previous models: it extends the Dependency Core Calculus with features made possible by the Flow-Limited Authorization Model. FLAC provides strong end-to-end information security guarantees even for programs that incorporate and implement rich dynamic authorization mechanisms. These guarantees include noninterference and robust declassification, which prevent attackers from influencing information disclosures in unauthorized ways. We prove these security properties formally for all FLAC programs and explore the expressiveness of FLAC with several examples.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"1 1","pages":"135-149"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85281641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Relative Perfect Secrecy: Universally Optimal Strategies and Channel Design
M. Khouzani, P. Malacaria
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 61-76. doi:10.1109/CSF.2016.12

Perfect secrecy describes cases where an adversary cannot learn anything about the secret beyond its prior distribution. A classical result by Shannon shows that a necessary condition for perfect secrecy is that the adversary should not be able to eliminate any of the possible secrets. In this paper we answer the following fundamental question: What is the lowest leakage of information that can be achieved when some of the secrets have to be eliminated? We address this question by deriving the minimum leakage in closed form, and explicitly providing "universally optimal" randomized strategies, in the sense that they guarantee the minimum leakage irrespective of the measure of entropy used to quantify the leakage. We then introduce a generalization of the Rényi family of asymmetric measures of leakage, which generalizes g-leakage, and show that a slight modification of our strategies is optimal with respect to an important class of such measures. Subsequently, we show that our schemes constitute the Nash equilibria of closely related two-person zero-sum games. This game perspective provides implicit solutions for a wider set of structural constraints and asymmetric entropies. Finally, we demonstrate how this work can also be seen as designing a universally optimal channel given a specified prior.
{"title":"Relative Perfect Secrecy: Universally Optimal Strategies and Channel Design","authors":"M. Khouzani, P. Malacaria","doi":"10.1109/CSF.2016.12","DOIUrl":"https://doi.org/10.1109/CSF.2016.12","url":null,"abstract":"Perfect secrecy describes cases where an adversary cannot learn anything about the secret beyond its prior distribution. A classical result by Shannon shows that a necessary condition for perfect secrecy is that the adversary should not be able to eliminate any of the possible secrets. In this paper we answer the following fundamental question: What is the lowest leakage of information that can be achieved when some of the secrets have to be eliminated? We address this question by deriving the minimum leakage in closed-form, and explicitly providing \"universally optimal\" randomized strategies, in the sense that they guarantee the minimum leakage irrespective of the measure of entropy used to quantify the leakage. We then introduce a generalization of Rényi family of asymmetric measures of leakage which generalizes the g-leakage and show that a slight modification of our strategies are optimal with respect to an important class of such measures. Subsequently, we show that our schemes constitute the Nash Equilibria of closely related two-person zero sum games. This game perspective provides implicit solutions for a wider set of structural constraints and asymmetric entropies. Finally we demonstrate how this work can also be seen as designing a universally optimal channel given a specified prior.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"23 1","pages":"61-76"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85151668","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Secure Software Licensing: Models, Constructions, and Proofs
S. Costea, B. Warinschi
2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 31-44. doi:10.1109/CSF.2016.10

The problem of secure software licensing is to enforce meaningful restrictions on how software is run on machines outside the control of the software author or vendor. The problem has been addressed through a variety of approaches, from software obfuscation to hardware-based solutions, but existing solutions offer only heuristic guarantees, which are often invalidated by attacks. This paper establishes foundations for secure software licensing in the form of rigorous models. We identify and formalize two key properties: privacy demands that licensed software does not leak unwanted information, and integrity ensures that the use of licensed software is compliant with a license; the license is a parameter of our models. Our formal definitions and proposed constructions leverage the isolation and attestation capabilities of recently proposed trusted hardware such as SGX, which proves to be a key enabling technology for provably secure software licensing.
{"title":"Secure Software Licensing: Models, Constructions, and Proofs","authors":"S. Costea, B. Warinschi","doi":"10.1109/CSF.2016.10","DOIUrl":"https://doi.org/10.1109/CSF.2016.10","url":null,"abstract":"The problem of secure software licensing is to enforce meaningful restrictions on how software is run on machines outside the control of the software author/vendor. The problem has been addressed through a variety of approaches from software obfuscation to hardware-based solutions, but existent solutions offer only heuristic guarantees which are often invalidated by attacks. This paper establishes foundations for secure software licensing in the form of rigorous models. We identify and formalize two key properties. Privacy demands that licensed software does not leak unwanted information, and integrity ensures that the use of licensed software is compliant with a license - the license is a parameter of our models. Our formal definitions and proposed constructions leverage the isolation/attestation capabilities of recently proposed trusted hardware like SGX which proves to be a key enabling technology for provably secure software licensing.","PeriodicalId":6500,"journal":{"name":"2016 IEEE 29th Computer Security Foundations Symposium (CSF)","volume":"40 1","pages":"31-44"},"PeriodicalIF":0.0,"publicationDate":"2016-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91051696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}