Since their appearance in the 1950s, computational models capable of performing probabilistic choices have received wide attention and are nowadays pervasive in almost every area of computer science. Their development has also been inextricably linked with questions about computational power and resource bounds. Although the most crucial notions in the field are well known, the related terminology is sometimes imprecise or misleading. The present work aims to clarify the core features of, and main differences between, the machines and classes developed in relation to randomized computation. To do so, we compare the modern definitions with the original ones, recalling the context in which they first appeared, and investigate the relations linking probabilistic and counting models.
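One core feature of the bounded-error machines surveyed here (the ancestors of today's class BPP) is that their error can be driven down by repetition: run a two-sided-error decider many times and take a majority vote. A minimal Python sketch of this classic amplification argument; the `noisy_is_even` decider is a made-up stand-in for any algorithm that is correct with probability 2/3:

```python
import random
from collections import Counter

def amplify(prob_decider, x, runs=301):
    """Majority vote over independent runs of a two-sided bounded-error
    decider; by a Chernoff bound the error probability decays
    exponentially in `runs` (the classic BPP amplification)."""
    votes = Counter(prob_decider(x) for _ in range(runs))
    return votes[True] > votes[False]

# Hypothetical decider: answers "is x even?" correctly with probability 2/3.
def noisy_is_even(x):
    correct = (x % 2 == 0)
    return correct if random.random() < 2 / 3 else not correct

random.seed(0)
assert amplify(noisy_is_even, 42)      # wrong with probability below 1e-7
assert not amplify(noisy_is_even, 7)
```

With 301 runs of a 2/3-correct decider, the majority vote errs with probability under 10^-7, which is why bounded-error classes are robust to the exact error threshold chosen.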
"On Randomized Computational Models and Complexity Classes: a Historical Overview" by Melissa Antonelli, Ugo Dal Lago, Paolo Pistone. arXiv:2409.11999, published 2024-09-18 in arXiv - CS - Logic in Computer Science.
We introduce Clerical, a programming language for exact real-number computation that combines first-order imperative-style programming with a limit operator for computing real numbers as limits of Cauchy sequences. We address the semidecidability of the linear ordering of the reals by incorporating nondeterministic guarded choice, through which decisions based on partial comparison operations on reals can be patched together to give total programs. The interplay between mutable state, nondeterminism, and computation of limits is controlled by the requirement that expressions computing limits and guards modify only local state. We devise a domain-theoretic denotational semantics that uses a variant of the Plotkin powerdomain construction tailored to our specific version of nondeterminism. We formulate a Hoare-style specification logic, show that it is sound for the denotational semantics, and illustrate the setup by implementing and proving correct a program that computes $\pi$ as the least positive zero of $\sin$. The modular character of Clerical allows us to compose the program from smaller parts, each of which is shown to be correct on its own. We provide a proof-of-concept OCaml implementation of Clerical, and formally verify parts of the development, notably the soundness of the specification logic, in the Coq proof assistant.
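The limit operator described above takes an expression that, given a precision parameter n, produces an approximation within 2^-n of the intended real. As a rough illustration of this Cauchy-representation idea only (exact rationals in Python standing in for Clerical's semantics; the names and the Newton iteration are ours, not the paper's):

```python
from fractions import Fraction

def limit(approx):
    """A 'real number' in the style of Clerical's limit operator:
    approx(n) must be within 2**-n of the limit (a Cauchy representation)."""
    return approx

# Example real: sqrt(2) via the Babylonian (Newton) iteration.  Convergence
# is quadratic, so a handful of steps already beats precision 2^-n.
def sqrt2(n):
    x = Fraction(3, 2)
    for _ in range(n.bit_length() + 4):
        x = (x + 2 / x) / 2
    return x

root2 = limit(sqrt2)
q = root2(20)                               # query at precision 2^-20
assert abs(q * q - 2) < Fraction(1, 2**18)
```

Clerical additionally insists that such limit expressions touch only local state, which is what makes the combination with mutable variables and nondeterministic guards semantically well behaved.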
"An Imperative Language for Verified Exact Real-Number Computation" by Andrej Bauer, Sewon Park, Alex Simpson. arXiv:2409.11946, published 2024-09-18 in arXiv - CS - Logic in Computer Science.
The $\lambda\mu$-calculus plays a central role in the theory of programming languages, as it extends the Curry-Howard correspondence to classical logic. A major drawback is that it does not satisfy Böhm's Theorem and lacks the corresponding notion of approximation. We show, however, that Ehrhard and Regnier's Taylor expansion can be easily adapted, thus providing a resource-conscious approximation theory. This produces a sensible $\lambda\mu$-theory with which we prove some advanced properties of the $\lambda\mu$-calculus, such as Stability and the Perpendicular Lines Property, from which the impossibility of parallel computations follows.
"Resource approximation for the $λμ$-calculus" by Davide Barbarossa. arXiv:2409.11587, published 2024-09-17 in arXiv - CS - Logic in Computer Science.
We prove the Stability Property for the call-by-value $\lambda$-calculus (CbV in the following). This result states necessary conditions under which the contexts of the CbV $\lambda$-calculus commute with intersections of approximants. It is an important non-trivial result, which implies the sequentiality of the calculus. We prove it via the tool of Taylor-resource approximation, whose power has been shown in several recent papers. This technique is usually conceived for the ordinary $\lambda$-calculus, but it can easily be defined for the CbV setting. Our proof adapts the one for the ordinary calculus using the same technique, with some minimal technical modifications due to the fact that in the CbV setting one linearises terms in a slightly different way than usual (cf. $!(A \multimap B)$ vs $!A \multimap B$). The content of this article is taken from the PhD thesis of the author.
"Stability Property for the Call-by-Value $λ$-calculus through Taylor Expansion" by Davide Barbarossa. arXiv:2409.11572, published 2024-09-17 in arXiv - CS - Logic in Computer Science.
Preference Inference involves inferring additional user preferences from elicited or observed preferences, based on assumptions regarding the form of the user's preference relation. In this paper we consider a situation in which alternatives have an associated vector of costs, each component corresponding to a different criterion, and are compared using a kind of lexicographic order, similar to the way alternatives are compared in a Hierarchical Constraint Logic Programming model. It is assumed that the user has some (unknown) importance ordering on criteria, and that to compare two alternatives, the combined costs of the alternatives with respect to the most important criteria are compared first; only if these combined costs are equal are the next most important criteria considered. The preference inference problem then consists of determining whether a preference statement can be inferred from a set of input preferences. We show that this problem is coNP-complete, even if one restricts the cardinality of the equal-importance sets to at most two elements and considers only non-strict preferences. However, the problem is polynomial if the user's ordering of criteria is assumed to be a total ordering; it is also polynomial if the sets of equally important criteria are all equivalence classes of a given fixed equivalence relation. We give an efficient polynomial algorithm for these cases, which also throws light on the structure of the inference.
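The comparison just described (sum the costs within each equal-importance set, most important set first, fall through on ties) can be sketched directly. This brute-force version is our own illustration of the ordering, not the paper's inference algorithm:

```python
def hier_compare(cost_a, cost_b, importance_levels):
    """Compare two cost vectors under a hierarchical model.
    importance_levels is a list of sets of criterion indices, most
    important first.  At each level the summed costs are compared and
    ties fall through to the next level.  Returns -1 if a is preferred
    (cheaper), 1 if b is preferred, 0 if the alternatives are equivalent."""
    for level in importance_levels:
        sum_a = sum(cost_a[i] for i in level)
        sum_b = sum(cost_b[i] for i in level)
        if sum_a != sum_b:
            return -1 if sum_a < sum_b else 1
    return 0

# Criteria 0 and 1 equally important, criterion 2 strictly less important:
# level {0,1} ties at 4 vs 4, so level {2} decides (9 > 0, so b preferred).
levels = [{0, 1}, {2}]
assert hier_compare([3, 1, 9], [2, 2, 0], levels) == 1
```

The hardness result concerns the inverse direction: given observed comparisons, deciding whether some importance ordering of this shape entails a further preference statement.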
"Computation and Complexity of Preference Inference Based on Hierarchical Models" by Nic Wilson, Anne-Marie George, Barry O'Sullivan. arXiv:2409.11044, published 2024-09-17 in arXiv - CS - Logic in Computer Science.
Counting the number of models of a Boolean formula is a fundamental problem in artificial intelligence and reasoning. Minimal models of a Boolean formula are critical in various reasoning systems, making the counting of minimal models essential for detailed inference tasks. Existing research has primarily focused on decision problems related to minimal models. In this work, we move beyond decision problems to address the challenge of counting minimal models. Specifically, we propose a novel knowledge compilation form that facilitates the efficient counting of minimal models. Our approach leverages the idea of justification and incorporates theories from answer set counting.
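To pin down the counting task: a minimal model is a model whose set of true atoms is inclusion-minimal among all models. The compilation form proposed in the paper exists precisely to avoid the naive exponential enumeration below, which we include only to make the definition concrete (DIMACS-style literals; our own toy code):

```python
from itertools import product

def minimal_models(clauses, n_vars):
    """Enumerate the minimal models (inclusion-minimal sets of true atoms)
    of a CNF formula by brute force over all 2^n assignments.
    Clauses are lists of non-zero ints: 3 means x3, -3 means not-x3."""
    models = []
    for bits in product([False, True], repeat=n_vars):
        assign = {v: bits[v - 1] for v in range(1, n_vars + 1)}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            models.append(frozenset(v for v in assign if assign[v]))
    # keep a model only if no other model's true-atom set is strictly below it
    return [m for m in models if not any(o < m for o in models)]

# (x1 or x2) has models {x1}, {x2}, {x1,x2}; only the first two are minimal.
assert len(minimal_models([[1, 2]], 2)) == 2
```

Note that minimal-model counting is generally harder than plain model counting, since minimality is a global condition relating models to one another.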
"Minimal Model Counting via Knowledge Compilation" by Mohimenul Kabir. arXiv:2409.10170, published 2024-09-16 in arXiv - CS - Logic in Computer Science.
Linear Temporal Logic over finite traces ($\text{LTL}_f$) is a widely used formalism with applications in AI, process mining, model checking, and more. The primary reasoning task for $\text{LTL}_f$ is satisfiability checking; yet the recent focus on explainable AI has increased interest in analyzing inconsistent formulas, making the enumeration of minimal explanations for infeasibility a relevant task for $\text{LTL}_f$ as well. This paper introduces a novel technique for enumerating the minimal unsatisfiable cores (MUCs) of an $\text{LTL}_f$ specification. The main idea is to encode an $\text{LTL}_f$ formula into an Answer Set Programming (ASP) specification such that the minimal unsatisfiable subsets (MUSes) of the ASP program directly correspond to the MUCs of the original $\text{LTL}_f$ specification. Leveraging recent advancements in ASP solving yields a MUC enumerator that achieves good performance in experiments conducted on established benchmarks from the literature.
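To fix the terminology: a minimal unsatisfiable core/subset is an unsatisfiable subset of the constraints, every proper subset of which is satisfiable. The ASP-based enumerator in the paper is far more sophisticated; this exponential reference implementation over propositional clauses (our illustration, not the paper's method) captures just the definition:

```python
from itertools import product, combinations

def satisfiable(clauses, n_vars):
    """Brute-force SAT check over all 2^n truth assignments."""
    return any(all(any((lit > 0) == bits[abs(lit) - 1] for lit in clause)
                   for clause in clauses)
               for bits in product([False, True], repeat=n_vars))

def enumerate_mus(clauses, n_vars):
    """All minimal unsatisfiable subsets (as sets of clause indices):
    unsatisfiable, but satisfiable after removing any single clause
    (satisfiability is monotone, so that suffices for minimality)."""
    muses = []
    for k in range(1, len(clauses) + 1):
        for sub in combinations(range(len(clauses)), k):
            if not satisfiable([clauses[i] for i in sub], n_vars) and \
               all(satisfiable([clauses[i] for i in sub if i != j], n_vars)
                   for j in sub):
                muses.append(set(sub))
    return muses

# x1, not-x1, x2, not-x2: the two MUSes are {0,1} and {2,3}.
assert enumerate_mus([[1], [-1], [2], [-2]], 2) == [{0, 1}, {2, 3}]
```

The paper's encoding transfers exactly this structure to $\text{LTL}_f$: MUSes of the ASP program stand in one-to-one correspondence with MUCs of the temporal specification.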
"Enumerating Minimal Unsatisfiable Cores of LTLf formulas" by Antonio Ielo, Giuseppe Mazzotta, Rafael Peñaloza, Francesco Ricca. arXiv:2409.09485, published 2024-09-14 in arXiv - CS - Logic in Computer Science.
Automata networks are a versatile model of finite discrete dynamical systems composed of interacting entities (the automata), able to embed any directed graph as a dynamics on its space of configurations (the set of vertices, representing all the assignments of a state to each entity). In this world, virtually any question is decidable by a simple exhaustive search. We lift the Rice-like complexity lower bound, stating that any non-trivial monadic second-order logic question on the graph of its dynamics is NP-hard or coNP-hard (given the automata network description), to bounded alphabets (including the Boolean case). This restriction is particularly meaningful for applications to "complex systems", where each entity has a restricted set of possible states (its alphabet). For the non-deterministic case, trivial questions are solvable in constant time, hence there is a sharp gap in complexity for the algorithmic solving of concrete problems on them. For the non-deterministic case, non-triviality is defined at bounded treewidth, which offers a structure for establishing metatheorems of complexity lower bounds.
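Concretely, an automata network assigns each automaton a local update function of the global configuration, and the dynamics is the directed graph on configurations induced by updating. A small Boolean example under parallel update (the update mode is our choice for illustration; the paper's setting is more general):

```python
from itertools import product

def dynamics_graph(local_functions):
    """Parallel-update dynamics of a Boolean automata network: maps each
    of the 2^n configurations to its unique successor configuration."""
    n = len(local_functions)
    return {conf: tuple(f(conf) for f in local_functions)
            for conf in product((0, 1), repeat=n)}

# Two automata that each copy the other's state: a "swap" network whose
# dynamics has two fixed points, (0,0) and (1,1), and one 2-cycle.
swap = dynamics_graph([lambda c: c[1], lambda c: c[0]])
assert swap[(0, 1)] == (1, 0)
assert swap[(1, 1)] == (1, 1)
```

The exhaustive search mentioned in the abstract is exactly this: the dynamics graph is finite, so any property of it is decidable, and the interesting question is the complexity of deciding it from the succinct network description rather than from the exponentially larger graph.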
"Rice-like complexity lower bounds for Boolean and uniform automata networks" by Aliénor Goubault--Larrecq, Kévin Perrot. arXiv:2409.08762, published 2024-09-13 in arXiv - CS - Logic in Computer Science.
Despite the significant interest in extending the AGM paradigm of belief change beyond finitary logics, the computational aspects of AGM have remained almost untouched. We investigate the computability of AGM contraction on non-finitary logics, and show an intriguing negative result: there are infinitely many uncomputable AGM contraction functions in such logics. More drastically, even if we restrict the theories used to represent epistemic states, the uncomputability remains in all non-trivial cases. On the positive side, we identify an infinite class of computable AGM contraction functions on Linear Temporal Logic (LTL). We use Büchi automata to construct such functions as well as to represent and reason about LTL knowledge.
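Over a finite propositional base, contraction is computable by brute force, which is exactly what the negative result rules out in general for non-finitary logics such as LTL. A toy full-meet contraction over a finite base (beliefs as predicates on truth assignments; this is entirely our illustration, and full meet is only the simplest of the AGM-style constructions):

```python
from itertools import product, combinations

def entails(beliefs, phi, n_vars):
    """phi holds in every truth assignment satisfying all the beliefs."""
    return all(phi(a) for a in product([False, True], repeat=n_vars)
               if all(b(a) for b in beliefs))

def full_meet_contraction(base, phi, n_vars):
    """Intersect all maximal subsets of the base that do not entail phi
    (the remainder sets), i.e. full meet contraction on a finite base."""
    remainders = []
    for k in range(len(base), -1, -1):          # largest subsets first
        for sub in combinations(base, k):
            if not entails(list(sub), phi, n_vars) and \
               not any(set(sub) < set(r) for r in remainders):
                remainders.append(set(sub))
    if not remainders:                          # phi is a tautology
        return base
    common = remainders[0]
    for r in remainders[1:]:
        common = common & r
    return list(common)

p = lambda a: a[0]   # belief "x1"
q = lambda a: a[1]   # belief "x2"
# Contracting {p, q} by p leaves exactly {q}.
assert full_meet_contraction([p, q], p, 2) == [q]
```

The paper's point is that no analogue of this procedure can exist uniformly once the logic admits infinitary theories; the computable LTL contractions it constructs instead work on Büchi-automata representations of the knowledge.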
"The Challenges of Effective AGM Belief Contraction" by Dominik Klumpp, Jandson S. Ribeiro. arXiv:2409.09171, published 2024-09-13 in arXiv - CS - Logic in Computer Science.
The A-hierarchy is a parametric analogue of the polynomial hierarchy in the context of parameterised complexity theory. We give a new characterisation of the A-hierarchy in terms of a generalisation of the SUBSET-SUM problem.
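For reference, classical SUBSET-SUM asks whether some subset of given integers sums to a target value; the paper builds its characterisation on a generalisation of this problem. The textbook pseudo-polynomial dynamic program (included for orientation only, it plays no role in the parameterised generalisation itself):

```python
def subset_sum(weights, target):
    """Pseudo-polynomial DP for SUBSET-SUM: `reachable` holds every sum
    attainable by some subset of the weights processed so far, capped
    at `target`.  Runs in O(len(weights) * target) set operations."""
    reachable = {0}
    for w in weights:
        reachable |= {s + w for s in reachable if s + w <= target}
    return target in reachable

assert subset_sum([3, 34, 4, 12, 5, 2], 9)       # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)  # no subset reaches 30
```

The DP is polynomial in the numeric value of the target but not in its bit length, which is precisely the kind of parameter-dependence that parameterised complexity classifies.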
"A SUBSET-SUM Characterisation of the A-Hierarchy" by Jan Gutleben, Arne Meier. arXiv:2409.07996, published 2024-09-12 in arXiv - CS - Logic in Computer Science.