Clean-Slate Development of Certified OS Kernels
Zhong Shao
DOI: 10.1145/2676724.2693180

The CertiKOS project at Yale aims to develop new language-based technologies for building large-scale certified system software. Initially, we thought that verifying an OS kernel would require new program logics and powerful proof automation tools, but would otherwise not be much different from standard Hoare-style program verification. After several years of trial and error, we have decided to take a different path from the one we originally planned. We now believe that building large-scale certified system software requires a fundamental shift in the way we design the underlying programming languages, program logics, and proof assistants. In this talk, I outline our new clean-slate approach, explain its rationale, and describe various lessons and insights based on our experience with the development of several new certified OS kernels.

A Framework for Verifying Depth-First Search Algorithms
P. Lammich, René Neumann
DOI: 10.1145/2676724.2693165

Many graph algorithms are based on depth-first search (DFS). The formalizations of such algorithms typically share many common ideas. In this paper, we summarize these ideas into a framework in Isabelle/HOL. Building on the Isabelle Refinement Framework, we provide support for a refinement-based development of DFS-based algorithms, from phrasing and proving correct the abstract algorithm, through choosing an adequate implementation style (e.g., recursive, tail-recursive), to creating an executable algorithm that uses efficient data structures. As a case study, we verify DFS-based algorithms of varying complexity, from a simple cyclicity checker, through a safety-property model checker, to complex algorithms such as nested DFS and Tarjan's SCC algorithm.

Certified Normalization of Context-Free Grammars
Denis Firsov, Tarmo Uustalu
DOI: 10.1145/2676724.2693177

Every context-free grammar can be transformed into an equivalent one in Chomsky normal form by a sequence of four transformations. In this work on the formalization of language theory, we prove formally, in the dependently typed programming language Agda, that each of these transformations is correct in the sense of making progress toward normality and preserving the language of the given grammar. We also show that the right sequence of these transformations leads to a grammar in Chomsky normal form that accepts the same language as the given grammar, since each transformation preserves the normality properties established by the previous ones. As we work in a constructive setting, the soundness and completeness proofs are functions converting between parse trees in the normalized and original grammars.

Completeness and Decidability of de Bruijn Substitution Algebra in Coq
S. Schäfer, G. Smolka, Tobias Tebbi
DOI: 10.1145/2676724.2693163

We consider a two-sorted algebra over de Bruijn terms and de Bruijn substitutions equipped with the constants and operations from Abadi et al.'s sigma-calculus. We consider expressions with term variables and substitution variables and show that the semantic equivalence obtained with the algebra coincides with the axiomatic equivalence obtained with finitely many axioms based on the sigma-calculus. We prove this result with an informative decision algorithm for axiomatic equivalence, which in the negative case returns a variable assignment separating the given expressions in the algebra. The entire development is formalized in Coq.

Practical Tactics for Verifying C Programs in Coq
Jingyuan Cao, Ming Fu, Xinyu Feng
DOI: 10.1145/2676724.2693162

Proof automation is essential for large-scale proof developments such as OS kernel verification. An effective approach is to develop tactics and SMT solvers to automatically prove verification conditions. However, for complex systems it is almost impossible to achieve fully automated verification, and human interaction is unavoidable. The key challenge is therefore, on the one hand, to reduce manual proofs as much as possible and, on the other hand, to provide user-friendly error messages when automated verification fails, so that users can adjust the specifications or the code accordingly, or do part of the proofs manually. In this paper we propose a set of practical tactics for verifying C programs in Coq, including both tactics for automatically proving separation logic assertions and tactics for automatic verification condition generation. In particular, we develop special tactics for verifying programs that manipulate singly-linked lists. Using our tactics, we are able to verify several C programs with a one-line proof script. Another key feature of our tactics is that, if they fail, they allow users to easily locate the problems causing the failure by inspecting the remaining subgoals, which greatly improves usability when human interaction is necessary.

Proceedings of the 2015 Conference on Certified Programs and Proofs
X. Leroy, Alwen Tiu
DOI: 10.1145/2676724

It is our great pleasure to welcome you to CPP 2015, the fourth ACM SIGPLAN conference on Certified Proofs and Programs. The CPP series of meetings aims to cover those topics in computer science and mathematics in which certification via formal techniques is crucial. Topics of interest range from interactive and automated theorem proving to program proof to the mechanization of mathematics, with the production of independently checkable certificates as a recurring theme. A manifesto for CPP, written by Jean-Pierre Jouannaud and Zhong Shao, can be found at http://cpp2015.inria.fr/manifesto.html.

The first three editions of CPP were held in December 2011 in Taipei (Taiwan), in December 2012 in Kyoto (Japan), and in December 2013 in Melbourne (Australia), all three co-located with APLAS, the Asian Symposium on Programming Languages and Systems. This year, for the first time, CPP is sponsored by ACM SIGPLAN and is co-located with POPL'15, the 42nd ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, in Mumbai, India. We are deeply grateful to ACM SIGPLAN for sponsoring CPP'15, and to the POPL'15 general chair and local organizers for hosting CPP'15.

We were pleased that Zhong Shao (Yale University) and Viktor Vafeiadis (MPI-SWS) accepted our invitation to be invited speakers for CPP'15. Abstracts of their presentations are included in the proceedings.

The program committee for CPP'15 was composed of 19 researchers from 12 countries. In response to the call for papers, we received a total of 26 submissions and accepted 18 papers for presentation and inclusion in the proceedings. Every submission was reviewed by at least four program committee members and their selected subreviewers. The electronic PC meeting was conducted with the help of the EasyChair conference management system.

Fixed Precision Patterns for the Formal Verification of Mathematical Constant Approximations
Yves Bertot
DOI: 10.1145/2676724.2693172

We describe two approaches for the computation of mathematical constant approximations inside interactive theorem provers. These two approaches share the same basis of fixed-point computation and differ only in the way the proofs of correctness of the approximations are described. The first approach performs interval computations, while the second approach relies on bounding errors, for example with the help of derivatives. As an illustration, we show how to describe good approximations of the logarithm function, and we compute -- to a precision of a million decimals inside the proof system, with a guarantee that all digits up to the millionth decimal are correct. All these experiments are performed with the Coq system, but most of the steps should apply to any interactive theorem prover.

Proving Lock-Freedom Easily and Automatically
Xiao Jia, Wei Li, Viktor Vafeiadis
DOI: 10.1145/2676724.2693179

Lock-freedom is a liveness property satisfied by most non-blocking concurrent algorithms. It ensures that at any point at least one thread is making progress towards termination, so the system as a whole makes progress. As a global property, lock-freedom is typically shown by global proofs or complex iterated arguments. We show that this complexity is not needed in practice. By introducing simple loop depth counters into the programs, we can reduce proving lock-freedom to checking simple local properties on those counters. We have implemented the approach in Cave and report on our findings.

Premise Selection and External Provers for HOL4
Thibault Gauthier, C. Kaliszyk
DOI: 10.1145/2676724.2693173

Learning-assisted automated reasoning has recently gained popularity among the users of Isabelle/HOL, HOL Light, and Mizar. In this paper, we present an add-on to the HOL4 proof assistant and an adaptation of the HOL(y)Hammer system that provides machine-learning-based premise selection and automated reasoning also for HOL4. We efficiently record the HOL4 dependencies and extract features from the theorem statements, which form the basis for premise selection. HOL(y)Hammer transforms the HOL4 statements into the various TPTP-ATP proof formats, which are then processed by the ATPs. We discuss the different evaluation settings: ATPs, accessible lemmas, and premise numbers. We measure the performance of HOL(y)Hammer on the HOL4 standard library. The results are combined accordingly and compared with the HOL Light experiments, showing a comparably high quality of predictions. The system directly benefits HOL4 users by automatically finding proof dependencies that can be reconstructed by Metis.

A Lightweight Formalization of the Metatheory of Bisimulation-Up-To
Kaustuv Chaudhuri, M. Cimini, D. Miller
DOI: 10.1145/2676724.2693170

Bisimilarity of two processes is formally established by producing a bisimulation relation that contains those two processes and obeys certain closure properties. In many situations, particularly when the underlying labeled transition system is unbounded, these bisimulation relations can be large and even infinite. The bisimulation-up-to technique has been developed to reduce the size of the relations being computed while retaining soundness, that is, the guarantee of the existence of a bisimulation. Such techniques are increasingly becoming a critical ingredient in the automated checking of bisimilarity. This paper is devoted to the formalization of the metatheory of several major bisimulation-up-to techniques for the process calculi CCS and the π-calculus (with replication). Our formalization is based on recent work on the proof theory of least and greatest fixpoints, particularly the use of relations defined (co-)inductively, and of co-inductive proofs about such relations, as implemented in the Abella theorem prover. An important feature of our formalization is that our definitions of the bisimulation-up-to relations are, in most cases, straightforward translations of published informal definitions, and our proofs clarify several technical details of the informal descriptions. Since the logic behind Abella also supports λ-tree syntax and generic reasoning using the ∇-quantifier, our treatment of the π-calculus is both direct and natural.
