We present an automated verification method for the security of Diffie-Hellman-based key exchange protocols. The method combines a Hoare-style logic with syntactic checking. It is applied to protocols in a simplified version of the Bellare-Rogaway-Pointcheval model (2000); security in the complete model can then be established automatically via the modular proof technique of Kudla and Paterson (2005).
{"title":"Automated Proofs for Diffie-Hellman-Based Key Exchanges","authors":"Long Ngo, C. Boyd, J. G. Nieto","doi":"10.1109/CSF.2011.11","DOIUrl":"https://doi.org/10.1109/CSF.2011.11","url":null,"abstract":"We present an automated verification method for security of Diffie-Hellman-based key exchange protocols. The method includes a Hoare-style logic and syntactic checking. The method is applied to protocols in a simplified version of the Bellare-Rogaway-Pointcheval model (2000). The security of the protocol in the complete model can be established automatically by a modular proof technique of Kudla and Paterson (2005).","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122403632","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We describe how to verify security properties of C code for cryptographic protocols by using a general-purpose verifier. We prove security theorems in the symbolic model of cryptography. Our techniques include: the use of ghost state to attach formal algebraic terms to concrete byte arrays and to detect collisions when two distinct terms map to the same byte array; decoration of a crypto API with contracts based on symbolic terms; and expression of the attacker model in terms of C programs. We rely on the general-purpose verifier VCC, and we guide VCC to prove security simply by writing suitable header files and annotations in implementation files, rather than by changing VCC itself. We formalize the symbolic model in Coq in order to justify the addition of axioms to VCC.
{"title":"Guiding a General-Purpose C Verifier to Prove Cryptographic Protocols","authors":"François Dupressoir, A. Gordon, J. Jürjens, D. Naumann","doi":"10.1109/CSF.2011.8","DOIUrl":"https://doi.org/10.1109/CSF.2011.8","url":null,"abstract":"We describe how to verify security properties of C code for cryptographic protocols by using a general-purpose verifier. We prove security theorems in the symbolic model of cryptography. Our techniques include: use of ghost state to attach formal algebraic terms to concrete byte arrays and to detect collisions when two distinct terms map to the same byte array, decoration of a crypto API with contracts based on symbolic terms, and expression of the attacker model in terms of C programs. We rely on the general-purpose verifier VCC, we guide VCC to prove security simply by writing suitable header files and annotations in implementation files, rather than by changing VCC itself. We formalize the symbolic model in Coq in order to justify the addition of axioms to VCC.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115978523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The security of key exchange and secure channel protocols, such as TLS, has been studied intensively. However, only a few works have considered what happens when the established keys are actually used -- to run some protocol securely over the established "channel". We call this a vertical protocol composition, and it is truly commonplace in today's communication with the diversity of VPNs and secure browser sessions. In fact, it is normal that we have several layers of secure channels: for instance, on top of a VPN connection, a browser may establish another secure channel (possibly with a different end point). Even using the same protocol several times in such a stack of channels is not unusual: an application may very well establish another TLS channel over an established one. We call this self-composition. In fact, there is nothing that tells us that all these compositions are sound, i.e., that the combination cannot introduce attacks that the individual protocols in isolation do not have. In this work, we prove a composability result in the symbolic model that allows for arbitrary vertical composition (including self-composition). It holds for protocols from any suite of channel and application protocols that fulfills a number of sufficient preconditions. These preconditions are satisfied by many practically relevant protocols such as TLS.
{"title":"Vertical Protocol Composition","authors":"Thomas Gross, S. Mödersheim","doi":"10.1109/CSF.2011.23","DOIUrl":"https://doi.org/10.1109/CSF.2011.23","url":null,"abstract":"The security of key exchange and secure channel protocols, such as TLS, has been studied intensively. However, only few works have considered what happens when the established keys are actuallyused -- to run some protocol securely over the established \"channel\". We call this a vertical protocol composition, and it is truly commonplace in today's communication with the diversity of VPNs and secure browser sessions. In fact, it is normal that we have several layers of secure channels: For instance, on top of a VPN-connection, a browser may establish another secure channel (possibly with a different end point). Even using the same protocol several times in such a stack of channels is not unusual: An application may very well establish another TLS channel over an established one. We call this self-composition. In fact, there is nothing that tells us that all these compositions are sound, i.e., that the combination cannot introduce attacks that the individual protocols in isolation do not have. In this work, we prove a composability result in the symbolic model that allows for arbitrary vertical composition (including self-composition). It holds for protocols from any suite of channel and application protocols that fulfills a number of sufficient preconditions. These preconditions are satisfied for many practically relevant protocols such as TLS.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132804635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Type systems for authorization are a popular device for the specification and verification of security properties in cryptographic applications. Though promising, existing frameworks exhibit limited expressive power, as the underlying specification languages fail to account for powerful notions of authorization based on access counts, usage bounds, and mechanisms of resource consumption, which characterize many modern online services and applications. We present a new type system that features a novel combination of affine logic, refinement types, and types for cryptography, to support the verification of resource-aware security policies. The type system allows us to analyze a number of cryptographic protocol patterns and security properties that are out of reach for existing verification frameworks based on static analysis.
{"title":"Resource-Aware Authorization Policies for Statically Typed Cryptographic Protocols","authors":"M. Bugliesi, Stefano Calzavara, F. Eigner, Matteo Maffei","doi":"10.1109/CSF.2011.13","DOIUrl":"https://doi.org/10.1109/CSF.2011.13","url":null,"abstract":"Type systems for authorization are a popular device for the specification and verification of security properties in cryptographic applications. Though promising, existing frameworks exhibit limited expressive power, as the underlying specification languages fail to account for powerful notions of authorization based on access counts, usage bounds, and mechanisms of resource consumption, which instead characterize most of the modern online services and applications. We present a new type system that features a novel combination of affine logic, refinement types, and types for cryptography, to support the verification of resource-aware security policies. The type system allows us to analyze a number of cryptographic protocol patterns and security properties, which are out of reach for existing verification frameworks based on static analysis.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126830885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Helios 2.0 is an open-source web-based end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this paper, we analyse ballot secrecy and discover a vulnerability which allows an adversary to compromise the privacy of voters. This vulnerability has been successfully exploited to break privacy in a mock election using the current Helios implementation. Moreover, the feasibility of an attack is considered in the context of French legislative elections and, based upon our findings, we believe it constitutes a real threat to ballot secrecy in such settings. Finally, we present a fix and show that our solution satisfies a formal definition of ballot secrecy using the applied pi calculus.
{"title":"Attacking and Fixing Helios: An Analysis of Ballot Secrecy","authors":"V. Cortier, B. Smyth","doi":"10.3233/JCS-2012-0458","DOIUrl":"https://doi.org/10.3233/JCS-2012-0458","url":null,"abstract":"Helios 2.0 is an open-source web-based end-to-end verifiable electronic voting system, suitable for use in low-coercion environments. In this paper, we analyse ballot secrecy and discover a vulnerability which allows an adversary to compromise the privacy of voters. This vulnerability has been successfully exploited to break privacy in a mock election using the current Helios implementation. Moreover, the feasibility of an attack is considered in the context of French legislative elections and, based upon our findings, we believe it constitutes a real threat to ballot secrecy in such settings. Finally, we present a fix and show that our solution satisfies a formal definition of ballot secrecy using the applied pi calculus.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132252779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis. On the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. The purpose of this article is to establish formal connections between both notions of confidentiality, and to compare them in terms of the security guarantees they deliver. We obtain the following results. First, we establish upper bounds for the leakage of every ε-differentially private mechanism in terms of ε and the size of the mechanism's input domain. We achieve this by identifying and leveraging a connection to coding theory. Second, we construct a class of ε-differentially private channels whose leakage grows with the size of their input domains. Using these channels, we show that there cannot be domain-size-independent bounds for the leakage of all ε-differentially private mechanisms. Moreover, we perform an empirical evaluation that shows that the leakage of these channels almost matches our theoretical upper bounds, demonstrating the accuracy of these bounds. Finally, we show that the question of providing optimal upper bounds for the leakage of ε-differentially private mechanisms in terms of rational functions of ε is in fact decidable.
{"title":"Information-Theoretic Bounds for Differentially Private Mechanisms","authors":"G. Barthe, Boris Köpf","doi":"10.1109/CSF.2011.20","DOIUrl":"https://doi.org/10.1109/CSF.2011.20","url":null,"abstract":"There are two active and independent lines of research that aim at quantifying the amount of information that is disclosed by computing on confidential data. Each line of research has developed its own notion of confidentiality: on the one hand, differential privacy is the emerging consensus guarantee used for privacy-preserving data analysis. On the other hand, information-theoretic notions of leakage are used for characterizing the confidentiality properties of programs in language-based settings. The purpose of this article is to establish formal connections between both notions of confidentiality, and to compare them in terms of the security guarantees they deliver. We obtain the following results. First, we establish upper bounds for the leakage of every eps-differentially private mechanism in terms of eps and the size of the mechanism's input domain. We achieve this by identifying and leveraging a connection to coding theory. Second, we construct a class of eps-differentially private channels whose leakage grows with the size of their input domains. Using these channels, we show that there cannot be domain-size-independent bounds for the leakage of all eps-differentially private mechanisms. Moreover, we perform an empirical evaluation that shows that the leakage of these channels almost matches our theoretical upper bounds, demonstrating the accuracy of these bounds. Finally, we show that the question of providing optimal upper bounds for the leakage of eps-differentially private mechanisms in terms of rational functions of eps is in fact decidable.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127286102","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. Experiments with our implementation show that several useful queries can be handled efficiently, and performance scales far better than would more standard implementations of probabilistic computation based on sampling.
{"title":"Dynamic Enforcement of Knowledge-Based Security Policies","authors":"Piotr (Peter) Mardziel, Stephen Magill, M. Hicks, M. Srivatsa","doi":"10.1109/CSF.2011.15","DOIUrl":"https://doi.org/10.1109/CSF.2011.15","url":null,"abstract":"This paper explores the idea of knowledge-based security policies, which are used to decide whether to answer queries over secret data based on an estimation of the querier's (possibly increased) knowledge given the results. Limiting knowledge is the goal of existing information release policies that employ mechanisms such as noising, anonymization, and redaction. Knowledge-based policies are more general: they increase flexibility by not fixing the means to restrict information flow. We enforce a knowledge-based policy by explicitly tracking a model of a querier's belief about secret data, represented as a probability distribution, and denying any query that could increase knowledge above a given threshold. We implement query analysis and belief tracking via abstract interpretation using a novel probabilistic polyhedral domain, whose design permits trading off precision with performance while ensuring estimates of a querier's knowledge are sound. Experiments with our implementation show that several useful queries can be handled efficiently, and performance scales far better than would more standard implementations of probabilistic computation based on sampling.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129133476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Randomization is used in computer security as a tool to introduce unpredictability into the software infrastructure. In this paper, we study the use of randomization to achieve secrecy and integrity guarantees for local memory. We follow the approach set out by Abadi and Plotkin. We consider the execution of an idealized language in two environments. In the strict environment, opponents cannot access local variables of the user program. In the lax environment, opponents may attempt to guess allocated memory locations and thus, with small probability, gain access to the local memory of the user program. We model these environments using two novel calculi: lambda-mu-hashref and lambda-mu-proberef. Our contribution to the Abadi-Plotkin program is to enrich the programming language with dynamic memory allocation, first-class and higher-order references, and call/cc-style control. On the one hand, these enhancements allow us to directly model a larger class of system hardening principles. On the other hand, the class of opponents is also enhanced, since our enriched language permits natural and direct encoding of attacks that alter the control flow of programs. Our main technical result is a fully abstract translation (up to probability) of lambda-mu-hashref into lambda-mu-proberef. Thus, in the presence of randomized layouts, the opponent gains no new power from being able to guess local references of the user program. Our numerical bounds are similar to those of Abadi and Plotkin; thus, the extra programming-language features do not cause a concomitant increase in the resources required for protection via randomization.
{"title":"Local Memory via Layout Randomization","authors":"R. Jagadeesan, Corin Pitcher, J. Rathke, J. Riely","doi":"10.1109/CSF.2011.18","DOIUrl":"https://doi.org/10.1109/CSF.2011.18","url":null,"abstract":"Randomization is used in computer security as a tool to introduce unpredictability into the software infrastructure. In this paper, we study the use of randomization to achieve the secrecy and integrity guarantees for local memory. We follow the approach set out by Abadi and Plot kin. We consider the execution of an idealized language in two environments. In the strict environment, opponents cannot access local variables of the user program. In the lax environment, opponents may attempt to guess allocated memory locations and thus, with small probability, gain access the local memory of the user program. We model these environments using two novel calculi: lambda-mu-hashref and lambda-mu-proberef. Our contribution to the Abadi-Plot kin program is to enrich the programming language with dynamic memory allocation, first class and higher order references and call/cc-style control. On the one hand, these enhancements allow us to directly model a larger class of system hardening principles. On the other hand, the class of opponents is also enhanced since our enriched language permits natural and direct encoding of attacks that alter the control flow of programs. Our main technical result is a fully abstract translation (up to probability) of lambda-mu-hashref into lambda-mu-proberef. Thus, in the presence of randomized layouts, the opponent gains no new power from being able to guess local references of the user program. Our numerical bounds are similar to those of Abadi and Plot kin, thus, the extra programming language features do not cause a concomitant increase in the resources required for protection via randomization.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115545638","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present StatVerif, an extension of the ProVerif process calculus with constructs for explicit state, in order to reason about protocols that manipulate global state. Global state is required by protocols used in hardware devices (such as smart cards and the TPM), as well as by protocols involving databases that store persistent information. We provide the operational semantics of StatVerif. We extend the ProVerif compiler to a compiler for StatVerif: it takes processes written in the extended process language and produces Horn clauses. Our compilation is carefully engineered to avoid many false attacks. We prove the correctness of the StatVerif compiler. We illustrate our method on two examples: a small hardware security device and a contract signing protocol. We are able to prove their desired properties automatically.
{"title":"StatVerif: Verification of Stateful Processes","authors":"Myrto Arapinis, J. Phillips, Eike Ritter, M. Ryan","doi":"10.1109/CSF.2011.10","DOIUrl":"https://doi.org/10.1109/CSF.2011.10","url":null,"abstract":"We present StatVerif, which is an extension the ProVerif process calculus with constructs for explicit state, in order to be able to reason about protocols that manipulate global state. Global state is required by protocols used in hardware devices (such as smart cards and the TPM), as well as by protocols involving databases that store persistent information. We provide the operational semantics of StatVerif. We extend the ProVerif compiler to a compiler for StatVerif: it takes processes written in the extended process language, and produces Horn clauses. Our compilation is carefully engineered to avoid many false attacks. We prove the correctness of the StatVerif compiler. We illustrate our method on two examples: a small hardware security device, and a contract signing protocol. We are able to prove their desired properties automatically.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129163227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper introduces Yarra, a conservative extension to C to protect applications from non-control data attacks. Yarra programmers specify their data integrity requirements by declaring critical data types and ascribing these critical types to important data structures. Yarra guarantees that such critical data is only written through pointers with the given static type. Any attempt to write to critical data through a pointer with an invalid type (perhaps because of a buffer overrun) is detected dynamically. We formalize Yarra's semantics and prove the soundness of a program logic designed for use with the language. A key contribution is to show that Yarra's semantics are strong enough to support sound local reasoning and the use of a frame rule, even across calls to unknown, unverified code. We evaluate a prototype implementation of a compiler and runtime system for Yarra by using it to harden four common server applications against known non-control data vulnerabilities. We show that Yarra defends against these attacks with only a negligible impact on their end-to-end performance.
{"title":"Modular Protections against Non-control Data Attacks","authors":"Cole Schlesinger, K. Pattabiraman, N. Swamy, D. Walker, B. Zorn","doi":"10.3233/JCS-140502","DOIUrl":"https://doi.org/10.3233/JCS-140502","url":null,"abstract":"This paper introduces Yarra, a conservative extension to C to protect applications from non-control data attacks. Yarra programmers specify their data integrity requirements by declaring critical data types and ascribing these critical types to important data structures. Yarra guarantees that such critical data is only written through pointers with the given static type. Any attempt to write to critical data through a pointer with an invalid type (perhaps because of a buffer overrun) is detected dynamically. We formalize Yarra's semantics and prove the soundness of a program logic designed for use with the language. A key contribution is to show that Yarra's semantics are strong enough to support sound local reasoning and the use of a frame rule, even across calls to unknown, unverified code. We evaluate a prototype implementation of a compiler and runtime system for Yarra by using it to harden four common server applications against known non-control data vulnerabilities. We show that Yarra defends against these attacks with only a negligible impact on their end-to-end performance.","PeriodicalId":364995,"journal":{"name":"2011 IEEE 24th Computer Security Foundations Symposium","volume":"142 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128587936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}