Ralf Küsters, Johannes Müller, Enrico Scapin, Tomasz Truderung
Modern remote electronic voting systems, such as the prominent Helios system, are designed to provide vote privacy and verifiability, where, roughly speaking, the latter means that voters can make sure that their votes were actually counted. In this paper, we propose a new practical voting system called sElect (secure/simple elections). This system, which we implemented as a platform-independent web-based application, is meant for low-risk elections and is designed to be particularly simple and lightweight in terms of its structure, the cryptography it uses, and the user experience. One of the unique features of sElect is that it supports fully automated verification, which does not require any user interaction and is triggered as soon as a voter looks at the election result. Despite its simplicity, we prove that this system provides a good level of privacy, verifiability, and accountability for low-risk elections.
"sElect: A Lightweight Verifiable Remote Voting System." 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 341-354. doi:10.1109/CSF.2016.31
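sElect's actual design layers encryption and a mix net, but the core idea behind its fully automated verification can be sketched in a few lines: each ballot carries a random verification code, and checking the published result reduces to a membership test that needs no user interaction. All names below are hypothetical; this is a toy illustration, not sElect's protocol.

```python
import secrets

def cast(vote: str) -> tuple[str, str]:
    # Each voter attaches a fresh random verification code to the vote.
    return vote, secrets.token_hex(8)

def verify(published: list[tuple[str, str]], vote: str, code: str) -> bool:
    # Automated verification: the voter's (vote, code) pair must appear
    # verbatim in the published result; no user interaction is needed.
    return (vote, code) in published

ballots = [cast("alice"), cast("bob"), cast("alice")]
published = sorted(ballots)   # stand-in for the mix net's shuffled output
vote, code = ballots[0]
print(verify(published, vote, code))   # True
```

Because the code is random and private to the voter, a server that drops or alters the ballot is caught with high probability, which is what underpins the accountability claim.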
Side-channel attacks recover confidential information from non-functional characteristics of computations, such as time or memory consumption. We describe a program analysis that uses symbolic execution to quantify the information that is leaked to an attacker who makes multiple side-channel measurements. The analysis also synthesizes the concrete public inputs (the "attack") that lead to maximum leakage, via a novel reduction to Max-SMT solving over the constraints collected with symbolic execution. Furthermore, model counting and information-theoretic metrics are used to compute an attacker's remaining uncertainty about a secret after a certain number of side-channel measurements are made. We have implemented the analysis in the Symbolic PathFinder tool and applied it in the context of password checking and cryptographic functions, showing how to obtain tight bounds on information leakage under a small number of attack steps.
"Multi-run Side-Channel Analysis Using Symbolic Execution and Max-SMT." C. Pasareanu, Quoc-Sang Phan, and P. Malacaria. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 387-400. doi:10.1109/CSF.2016.34
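The timing channel in a password check, and the entropy-based uncertainty measure the abstract mentions, can be illustrated on a toy scale. The sketch below brute-forces the attack-input choice instead of using symbolic execution and Max-SMT, and enumerates secrets instead of model counting; it only shows the quantities being computed, not the paper's technique.

```python
import math
from collections import Counter

SECRETS = [f"{i:03b}" for i in range(8)]      # 3-bit secrets, uniform prior

def compare(secret: str, guess: str) -> int:
    # Early-exit string comparison: the iteration count is the side channel.
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            break
    return steps

def remaining_entropy(guess: str) -> float:
    # Expected Shannon entropy of the secret after one timing observation:
    # the observation partitions the secrets into classes of size k.
    sizes = Counter(compare(s, guess) for s in SECRETS).values()
    n = len(SECRETS)
    return sum(k / n * math.log2(k) for k in sizes)

# Brute-force stand-in for Max-SMT attack synthesis: pick the public input
# that minimises the attacker's remaining uncertainty.
best = min(SECRETS, key=remaining_entropy)
print(math.log2(len(SECRETS)), remaining_entropy(best))   # 3.0 -> 1.5 bits
```

A single timing measurement here halves the attacker's uncertainty, from 3 bits to 1.5 bits; repeating the analysis over multiple runs gives the multi-run bounds of the title.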
M. Alvim, K. Chatzikokolakis, Annabelle McIver, Carroll Morgan, C. Palamidessi, Geoffrey Smith
Quantitative information flow aims to assess and control the leakage of sensitive information by computer systems. A key insight in this area is that no single leakage measure is appropriate in all operational scenarios; as a result, many leakage measures have been proposed, each with different properties. To clarify this complex situation, this paper studies information leakage axiomatically, showing important dependencies among different axioms. It also establishes a completeness result about the g-leakage family, showing that any leakage measure satisfying certain intuitively-reasonable properties can be expressed as a g-leakage.
"Axioms for Information Leakage." 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 77-92. doi:10.1109/CSF.2016.13
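A minimal instance of the g-leakage family is Bayes leakage, obtained from the identity gain function: vulnerability is the probability of guessing the secret in one try, and multiplicative leakage compares it before and after observing a channel output. The small computation below follows the standard definitions from the quantitative information flow literature.

```python
# Prior on three secrets and a channel matrix (rows: secrets, cols: outputs).
prior = [0.5, 0.25, 0.25]
C = [[1.0, 0.0],
     [0.0, 1.0],
     [0.5, 0.5]]

def bayes_vuln(pi):
    # g-vulnerability with the identity gain function g(w, x) = [w == x]:
    # the probability of guessing the secret in one try.
    return max(pi)

def posterior_vuln(pi, C):
    # Expected vulnerability after observing the channel's output.
    return sum(max(pi[x] * C[x][y] for x in range(len(pi)))
               for y in range(len(C[0])))

mult_leakage = posterior_vuln(prior, C) / bayes_vuln(prior)
print(mult_leakage)   # 1.5: the channel leaks, but not everything
```

Swapping in other gain functions g(w, x) yields the rest of the g-leakage family; the paper's completeness result says that any measure satisfying its axioms arises this way.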
Attack-defence trees are a powerful technique for formally evaluating attack-defence scenarios. They represent in an intuitive, graphical way the interaction between an attacker and a defender who compete in order to achieve conflicting objectives. We propose a novel framework for the formal analysis of quantitative properties of complex attack-defence scenarios, using an extension of attack-defence trees which models temporal ordering of actions and allows explicit dependencies in the strategies adopted by attackers and defenders. We adopt a game-theoretic approach, translating attack-defence trees to two-player stochastic games, and then employ probabilistic model checking techniques to formally analyse these models. This provides a means to both verify formally specified security properties of the attack-defence scenarios and, dually, to synthesise strategies for attackers or defenders which guarantee or optimise some quantitative property, such as the probability of a successful attack, the expected cost incurred, or some multi-objective trade-off between the two. We implement our approach, building upon the PRISM-games model checker, and apply it to a case study of an RFID goods management system.
"Quantitative Verification and Synthesis of Attack-Defence Scenarios." Zaruhi Aslanyan, F. Nielson, and D. Parker. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 105-119. doi:10.1109/CSF.2016.15
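The quantities involved can be previewed on a static attack-defence tree, before any game-theoretic machinery. The sketch below (all node names invented) evaluates a tree bottom-up under independent leaf probabilities; the paper goes further, translating such trees with temporal ordering into stochastic games and synthesising strategies with PRISM-games.

```python
# A toy attack-defence tree, evaluated bottom-up with independent leaf
# probabilities: attack = pick_lock OR (steal_key AND copy_key), countered
# by the defender's alarm.

def AND(*ps):
    prod = 1.0
    for p in ps:
        prod *= p
    return prod

def OR(*ps):
    none_succeed = 1.0
    for p in ps:
        none_succeed *= 1.0 - p
    return 1.0 - none_succeed

p_attack  = OR(0.2, AND(0.5, 0.8))     # pick_lock=0.2, steal=0.5, copy=0.8
p_success = p_attack * (1.0 - 0.3)     # alarm defends with probability 0.3
print(round(p_success, 3))             # 0.364
```

Strategy synthesis then asks a harder question than this closed-form evaluation: which ordering and choice of actions maximises (or minimises) such a probability, possibly traded off against expected cost.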
For many software components, it is useful and important to verify their security. This can be done by an analysis of the software itself, or by isolating the software behind a protection mechanism such as an operating system kernel (virtual-memory protection) or cryptographic authentication (don't accept untrusted inputs). But the protection mechanisms themselves must then be verified not just for safety but for functional correctness. Several recent projects have demonstrated that formal, deductive functional-correctness verification is now possible for kernels, crypto, and compilers. Here I explain some of the modularity principles that make these verifications possible.
"Modular Verification for Computer Security." A. Appel. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 1-8. doi:10.1109/CSF.2016.8
Environmental noise (e.g. heat, ionized particles, etc.) causes transient faults in hardware, which lead to corruption of stored values. Mission-critical devices require such faults to be mitigated by fault-tolerance - a combination of techniques that aim at preserving the functional behaviour of a system despite the disruptive effects of transient faults. Fault-tolerance typically has a high deployment cost - special hardware might be required to implement it - and provides weak statistical guarantees. It is also based on the assumption that faults are rare. In this paper, we consider scenarios where security, rather than functional correctness, is the main asset to be protected. Our main contribution is a theory for expressing confidentiality of data in the presence of transient faults. We show that the natural probabilistic definition of security in the presence of faults can be captured by a possibilistic definition. Furthermore, the possibilistic definition is implied by a known bisimulation-based property, called Strong Security. We illustrate the utility of these results for a simple RISC architecture for which only the code memory and program counter are assumed fault-tolerant. We present a type-directed compilation scheme that produces RISC code from a higher-level language for which Strong Security holds - i.e. well-typed programs compile to RISC code which is secure despite transient faults. In contrast with fault-tolerance solutions, our technique assumes relatively little special hardware, gives formal guarantees, and works in the presence of an active attacker who aggressively targets parts of a system and induces faults precisely.
"Fault-Resilient Non-interference." F. Tedesco, David Sands, and Alejandro Russo. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 401-416. doi:10.1109/CSF.2016.35
Many security protocols involve humans, not machines, as endpoints. The differences are critical: humans are not only computationally weaker than machines, they are naive, careless, and gullible. In this paper, we provide a model for formalizing and reasoning about these inherent human limitations and their consequences. Specifically, we formalize models of fallible humans in security protocols as multiset rewrite theories. We show how the Tamarin tool can then be used to automatically analyze security protocols involving human errors. We provide case studies of authentication protocols that show how different protocol constructions and features differ in their effectiveness with respect to different kinds of fallible humans. This provides a starting point for a fine-grained classification of security protocols from a usable-security perspective.
"Modeling Human Errors in Security Protocols." D. Basin, S. Radomirovic, and Lara Schmid. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 325-340. doi:10.1109/CSF.2016.30
Secure compilation studies compilers that generate target-level components that are as secure as their source-level counterparts. Full abstraction is the most widely-proven property when defining a secure compiler. A compiler is modular if it allows different components to be compiled independently and then to be linked together to form a whole program. Unfortunately, many existing fully-abstract compilers to untyped machine code are not modular. So, while fully-abstractly compiled components are secure from malicious attackers, if they are linked against each other the resulting component may become vulnerable to attacks. This paper studies how to devise modular, fully-abstract compilers. It first analyses the attacks arising when compiled programs are linked together, identifying security threats that are due to linking. Then, it defines a compiler from an object-based language with method calls and dynamic memory allocation to untyped assembly language extended with a memory isolation mechanism. The paper provides a proof sketch that the defined compiler is fully-abstract and modular, so its output can be linked together without introducing security violations.
"On Modular and Fully-Abstract Compilation." Marco Patrignani, Dominique Devriese, and F. Piessens. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 17-30. doi:10.1109/CSF.2016.9
Summary form only given. As people get excited about the latest idea for "Big Data" and the "Internet of Things", we computer people often shake our heads and say "It won't scale." Pessimism isn't always justified: we have been able to scale up quite a number of tasks, from connectivity through search to social media. But other applications are recalcitrant, from energy management to medical records. The conventional computer-science view is that scaling systems is about computational complexity; about whether the storage or communications required for a task grows more than linearly in the number of users. Over the past thirty years we've developed a pretty good theory of that, but we're learning that it's nowhere near enough. In this talk I present a complementary view, based on over thirty years' experience of security engineering, that the real limits to scale are usually elsewhere. Even where the data are manageable and the algorithms straightforward, things can fail because of the scaling properties of the social context, the economic model or the regulatory environment. This makes some automation projects much harder than they seem. When it comes to safety and privacy, many of the attacks that are easy to do in the lab are rare in the wild, as they don't scale either. But others surprise us; no-one in the intelligence community anticipated a leak on the Snowden scale. In short, scaling is now a problem not of computer science but of systems engineering, economics, governance and much else. Conceiving problems too narrowly makes failure likely, while good engineering will require ever more awareness of context. The implications for research, education and policy bear some thought.
"Are the Real Limits to Scale a Matter of Science, or Engineering, or of Something Else? (Abstract only)." Ross J. Anderson. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), p. 16. doi:10.1109/CSF.2016.41
In this paper, we present a novel notion of compositional non-interference for component-based systems. Our specification mechanism for non-interference properties is based on equivalence relations, catering for a precise formalization of declassified information. It takes assumptions on the environment into consideration. We also present a new notion of non-interference for services provided by a component and prove that a component only providing non-interferent services is itself non-interferent. Using these properties, secure information flow in a component-based system can be proved by separately analyzing each of the services that are provided by the components. As a result, we gain modular, precise, and reusable information-flow specifications for component-based systems.
"Non-interference with What-Declassification in Component-Based Systems." Simon Greiner and Daniel Grahl. 2016 IEEE 29th Computer Security Foundations Symposium (CSF), pp. 253-267. doi:10.1109/CSF.2016.25
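The equivalence-relation view of what-declassification can be demonstrated by a brute-force check over a finite toy domain: a service is secure when secrets that agree on the declassified part (here, hypothetically, the parity) produce identical public outputs. This is only the shape of the definition; the paper's setting is component-based services with environment assumptions and a compositionality theorem.

```python
from itertools import product

def declassify(secret: int) -> int:
    # The equivalence relation: secrets are low-equivalent iff they agree
    # on the declassified part -- here, the parity.
    return secret % 2

def non_interferent(service, secrets, lows) -> bool:
    # A service is secure iff low-equivalent secrets give identical public
    # outputs for every low input.
    return all(service(s1, low) == service(s2, low)
               for s1, s2 in product(secrets, repeat=2)
               for low in lows
               if declassify(s1) == declassify(s2))

ok    = lambda secret, low: low + secret % 2    # releases only the parity
leaky = lambda secret, low: low + secret        # releases the whole secret

print(non_interferent(ok,    range(8), range(4)))    # True
print(non_interferent(leaky, range(8), range(4)))    # False
```

The compositionality result then lets such per-service checks be combined: if every service a component provides passes, the component as a whole is non-interferent.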