Minimal Control Placement of Networked Reaction-Diffusion Systems Based on Turing Model
Yuexin Cao, Yibei Li, Lirong Zheng, Xiaoming Hu
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1809-1831, June 2024. DOI: 10.1137/23m1616856. Abstract. In this paper, we consider the problem of placing a minimal number of controls to achieve controllability for a class of networked control systems based on the original Turing reaction-diffusion model, which is governed by a set of ordinary differential equations with interactions defined by a ring graph. The Turing model considers two morphogens reacting and diffusing over a spatial domain and is widely accepted as one of the most fundamental models of pattern formation in a developing embryo. It is of great importance to understand the mechanism behind the various reaction kinetics that generate such a wide range of patterns. As a first step toward this goal, we study controllability of the Turing model for cells connected as a square grid, where controls can be applied to the boundary cells. We first investigate the minimal control placement problem for the diffusion-only system. The eigenvalues of the diffusion matrix are classified by their geometric multiplicity, and the properties of the corresponding eigenspaces are studied. Symmetric control sets are designed to categorize control candidates by the symmetry of the network topology. A necessary and sufficient condition is then provided for the minimal control placement that guarantees controllability of the diffusion system. Furthermore, we show that the necessary condition extends to the Turing model by a natural expansion of the symmetric control sets, and, under certain circumstances, we prove that it is also sufficient to ensure controllability of the Turing model.
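The role of geometric multiplicity in minimal control placement can be illustrated with a toy computation (the standard Kalman rank test on a small ring, not the paper's square-grid analysis): on a 5-cell ring, the graph Laplacian has eigenvalues of geometric multiplicity two, so a single boundary control cannot achieve controllability of the diffusion-only system, while two adjacent controls can.

```python
import numpy as np

def ring_laplacian(n):
    """Graph Laplacian of an n-cell ring (cycle graph)."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i + 1) % n] -= 1.0
        L[i, (i - 1) % n] -= 1.0
    return L

def kalman_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1) B]."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks))

n = 5
A = -ring_laplacian(n)       # diffusion-only dynamics x' = -L x + B u

B1 = np.eye(n)[:, [0]]       # a single control at cell 0
B2 = np.eye(n)[:, [0, 1]]    # controls at adjacent cells 0 and 1

print(kalman_rank(A, B1))    # 3 < 5: repeated Laplacian eigenvalues block a single control
print(kalman_rank(A, B2))    # 5: the pair of controls renders the system controllable
```

The single-control rank equals the number of distinct Laplacian eigenvalues (3 for a 5-ring), which is exactly the symmetry obstruction the abstract refers to.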
Flatness Approach for the Boundary Controllability of a System of Heat Equations
Blaise Colle, Jérôme Lohéac, Takéo Takahashi
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1766-1782, June 2024. DOI: 10.1137/23m1577833. Abstract. We study the boundary controllability of a [math] system of heat equations by using a flatness approach. It is known that, according to the relation between the diffusion coefficients of the heat equations, the system can be null-controllable or not null-controllable for any [math], where [math]. Here we recover this result in the case [math] by using the flatness method, and we obtain an explicit formula for the control and for the corresponding solutions. In particular, the state and the control have Gevrey regularity in time and in space.
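The flatness idea behind such explicit formulas can be sketched for a single heat equation (the classical one-equation construction, used here only as an illustration, not the paper's [math] system): the series [math]\theta(x,t)=\sum_k y^{(k)}(t)\,x^{2k}/(2k)![math] formally solves [math]\theta_t=\theta_{xx}[math] with flat output [math]y(t)=\theta(0,t)[math], and the boundary control is read off by evaluating the series at the boundary. A polynomial flat output truncates the series, so the identity can be checked symbolically.

```python
import sympy as sp

x, t = sp.symbols("x t")

# Flat output: a polynomial, so the flatness series terminates after a few terms.
y = t**3

# theta(x, t) = sum_k y^(k)(t) * x^(2k) / (2k)!
theta = sum(sp.diff(y, t, k) * x**(2 * k) / sp.factorial(2 * k) for k in range(4))

# The parametrized state solves the heat equation theta_t = theta_xx exactly.
residual = sp.simplify(sp.diff(theta, t) - sp.diff(theta, x, 2))
print(residual)   # 0

# The boundary control is the trace of the parametrization, e.g. at x = 1.
control = sp.expand(theta.subs(x, 1))
print(control)
```

For null controllability one takes a Gevrey bump function as flat output instead of a polynomial, which is where the Gevrey regularity mentioned in the abstract enters.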
Nonconservative Stability Criteria for Semi-Markovian Impulsive Switched Systems
Shenyu Liu, Penghui Wen
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1783-1808, June 2024. DOI: 10.1137/23m1564833. Abstract. This paper proposes criteria for establishing the asymptotic moment stability of semi-Markovian impulsive switched systems. Under some mild assumptions, we formulate an auxiliary linear time-delayed system based on the Lyapunov characterizations of the subsystems and impulses, as well as the properties of the underlying semi-Markovian impulsive switching signal. Our main result provides an upper bound on the moment, which is directly related to a solution of the aforementioned linear time-delayed system. Specifically, the semi-Markovian impulsive switched system is asymptotically moment stable if the auxiliary linear time-delayed system is asymptotically stable. In situations where the mode-dependent sojourn time distributions of the underlying impulsive switching signals are all exponential, uniform, or trigonometric, we deduce explicit formulae for the auxiliary linear time-delayed systems. To prove the main result, we compute the expected gain function, which requires formulating a generalized renewal equation. Finally, we test our stability criteria on a numerical example in different scenarios and show that our stability results are nonconservative compared to the statistically obtained average of state-norms and state-norm-squares.
Mean Field Games in a Stackelberg Problem with an Informed Major Player
Philippe Bergault, Pierre Cardaliaguet, Catherine Rainer
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1737-1765, June 2024. DOI: 10.1137/23m1615188. Abstract. We investigate a stochastic differential game in which a major player has private information (the knowledge of a random variable), which she discloses through her control to a population of small players playing in a Nash mean field game equilibrium. The major player's cost depends on the distribution of the population, while the cost of the population depends on the random variable known by the major player. We show that the game has a relaxed solution and that the optimal control of the major player is approximately optimal in games with a large but finite number of small players.
The Occupation Kernel Method for Nonlinear System Identification
Joel A. Rosenfeld, Benjamin P. Russo, Rushikesh Kamalapurkar, Taylor T. Johnson
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1643-1668, June 2024. DOI: 10.1137/19m127029x. Abstract. This manuscript presents a novel approach to nonlinear system identification that leverages densely defined Liouville operators and a new "kernel" function, dubbed an occupation kernel, that represents an integration functional over a reproducing kernel Hilbert space (RKHS). The manuscript thoroughly explores the concept of occupation kernels in the context of RKHSs of continuous functions and establishes Liouville operators over RKHSs, where several dense domains are found for specific examples of this unbounded operator. The combination of these two concepts allows for the embedding of a dynamical system into an RKHS, where function-theoretic tools may be leveraged for the examination of such systems. This framework allows trajectories of a nonlinear dynamical system to be treated as a fundamental unit of data for a nonlinear system identification routine. The approach is demonstrated to identify parameters of a dynamical system accurately while also exhibiting a certain robustness to noise.
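The trajectories-as-data idea can be illustrated in one dimension without the RKHS machinery (a deliberately minimal sketch; the linear model and least-squares step are illustrative assumptions, not the paper's method): for [math]\dot{x}=ax[math], the fundamental theorem of calculus gives [math]x(T)-x(0)=a\int_0^T x(t)\,dt[math], so the integral of each trajectory — the quantity an occupation kernel represents — furnishes one linear equation in the unknown parameter.

```python
import numpy as np

a_true = -0.7   # ground-truth parameter of the dynamics x' = a * x

def trajectory(x0, T=2.0, n=2001):
    """Sampled closed-form solution x(t) = x0 * exp(a_true * t)."""
    ts = np.linspace(0.0, T, n)
    return ts, x0 * np.exp(a_true * ts)

def integrate(ts, xs):
    """Trapezoidal approximation of the trajectory's time integral."""
    return float(np.sum((xs[1:] + xs[:-1]) * np.diff(ts)) / 2)

# Each trajectory contributes one equation: x(T) - x(0) = a * integral of x.
lhs, rhs = [], []
for x0 in (1.0, 2.0, -0.5):
    ts, xs = trajectory(x0)
    lhs.append(integrate(ts, xs))   # the occupation-integral of the trajectory
    rhs.append(xs[-1] - xs[0])      # net change along the trajectory

a_hat = np.linalg.lstsq(np.array(lhs)[:, None], np.array(rhs), rcond=None)[0][0]
print(a_hat)   # close to -0.7
```

The full method replaces the scalar integral with an occupation kernel in an RKHS, which extends this one-equation-per-trajectory structure to nonlinear vector fields.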
Stability and Genericity of Bang-Bang Controls in Affine Problems
Alberto Domínguez Corella, Gerd Wachsmuth
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1669-1689, June 2024. DOI: 10.1137/23m1586446. Abstract. We analyze the role of the bang-bang property in affine optimal control problems. We show that many essential stability properties of affine problems are only satisfied when minimizers are bang-bang. We employ Stegall's variational principle to prove that almost any linear perturbation leads to a bang-bang strict global minimizer. Examples are given to show the applicability of our results to specific optimal control problems.
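Why affine problems favor bang-bang minimizers can be seen in a discretized toy example (an illustrative discretization, not the paper's setting): with dynamics [math]\dot{x}=u[math], [math]|u|\le 1[math], a cost linear in the state, and matched endpoints, the discretized problem is a linear program, and an LP attains its minimum at a vertex of the box — so every control value sits at a bound.

```python
import numpy as np
from scipy.optimize import linprog

# Discretization of: minimize int_0^1 x(t) dt
# subject to x' = u, |u| <= 1, x(0) = x(1) = 0, with N time steps.
# Substituting x_k = h * sum_{j<k} u_j reduces the cost to a weighted sum of u.
N = 20
weights = np.arange(N, 0, -1, dtype=float)   # u_j is integrated over N - j later steps

res = linprog(
    c=weights,                               # linear (affine-in-control) cost
    A_eq=np.ones((1, N)), b_eq=[0.0],        # endpoint constraint x(1) = x(0)
    bounds=[(-1.0, 1.0)] * N,
    method="highs",
)
u = res.x
print(np.round(u))   # every entry is -1 or +1: bang-bang with a single switch
```

The distinct weights make the minimizer unique, so the bang-bang structure here is forced, mirroring the genericity statement that almost any linear perturbation yields a bang-bang strict global minimizer.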
Mean Viability Theorems and Second-Order Hamilton–Jacobi Equations
Christian Keller
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1615-1642, June 2024. DOI: 10.1137/23m1550438. Abstract. We introduce the notion of mean viability for controlled stochastic differential equations and establish counterparts of Nagumo's classical viability theorems (necessary and sufficient conditions for mean viability). As an application, we provide a purely probabilistic proof of a comparison principle and of existence for contingent and viscosity solutions of second-order fully nonlinear path-dependent Hamilton–Jacobi–Bellman equations. We do not use compactness and optimal stopping arguments, which are usually employed in the literature on viscosity solutions for second-order path-dependent PDEs.
Randomized Optimal Stopping Problem in Continuous Time and Reinforcement Learning Algorithm
Yuchao Dong
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1590-1614, June 2024. DOI: 10.1137/22m1516725. Abstract. In this paper, we study the optimal stopping problem in the so-called exploratory framework, in which the agent takes actions randomly, conditioned on the current state, and a regularization term is added to the reward functional. This transformation reduces the optimal stopping problem to a standard optimal control problem. For the American put option model, we derive the related HJB equation and prove its solvability. Furthermore, we give a convergence rate for policy iteration and compare our solution to the classical American put option problem. Our results indicate a trade-off between the convergence rate and the bias in the choice of the temperature constant. Based on the theoretical analysis, a reinforcement learning algorithm is designed, and numerical results are demonstrated for several models.
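The bias side of the temperature trade-off can be sketched on a binomial-tree American put (a standard discretization used only for illustration; the softmax form and temperature parameter mirror the entropy-regularized formulation, not the paper's algorithm): replacing the hard max(exercise, continue) in backward induction by a log-sum-exp at temperature tau inflates each step's value by at most tau * log(2), so the regularized price lies above the classical one and converges to it as tau shrinks.

```python
import numpy as np

def american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0, N=100, tau=0.0):
    """Binomial-tree American put; tau > 0 replaces the exercise/continue
    max by an entropy-regularized softmax (randomized / exploratory stopping)."""
    dt = T / N
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    p = (np.exp(r * dt) - d) / (u - d)          # risk-neutral up-probability
    disc = np.exp(-r * dt)

    S = S0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
    V = np.maximum(K - S, 0.0)                  # terminal payoff
    for n in range(N - 1, -1, -1):
        S = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
        cont = disc * (p * V[:-1] + (1 - p) * V[1:])
        pay = np.maximum(K - S, 0.0)
        if tau > 0:
            m = np.maximum(pay, cont)           # numerically stable log-sum-exp
            V = m + tau * np.log(np.exp((pay - m) / tau) + np.exp((cont - m) / tau))
        else:
            V = np.maximum(pay, cont)
    return V[0]

hard = american_put()
soft = american_put(tau=0.05)
print(hard, soft)   # the regularized value sits above the unregularized price
```

Since log-sum-exp dominates the max pointwise, the soft value is an upper bound on the classical price; a smaller tau tightens the bias but, per the abstract's trade-off, slows policy iteration.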
Invariance Principles for [math]-Brownian-Motion-Driven Stochastic Differential Equations and Their Applications to [math]-Stochastic Control
Xiaoxiao Peng, Shijie Zhou, Wei Lin, Xuerong Mao
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1569-1589, June 2024. DOI: 10.1137/23m1564936. Abstract. The G-Brownian-motion-driven stochastic differential equations (G-SDEs), as well as the G-expectation, seminally proposed by Peng and his colleagues, have been extensively applied to describing a particular kind of uncertainty arising in real-world systems modeling. Mathematically depicting the long-time and limit behaviors of solutions of G-SDEs is beneficial to understanding the mechanisms of a system's evolution. Here, we develop a new G-semimartingale convergence theorem and further establish a new invariance principle for investigating the long-time behaviors emergent in G-SDEs. We also validate the uniqueness and global existence of the solution of G-SDEs whose vector fields are only locally Lipschitz with a linear upper bound. To demonstrate the broad applicability of our analytically established results, we investigate their application to achieving G-stochastic control in a few representative dynamical systems.
Optimal Control of Stochastic Delay Differential Equations and Applications to Path-Dependent Financial and Economic Models
Filippo De Feo, Salvatore Federico, Andrzej Święch
SIAM Journal on Control and Optimization, Volume 62, Issue 3, Page 1490-1520, June 2024. DOI: 10.1137/23m1553960. Abstract. In this manuscript we consider a class of optimal control problems for stochastic differential delay equations. First, we rewrite the problem in a suitable infinite-dimensional Hilbert space. Then, using the dynamic programming approach, we characterize the value function of the problem as the unique viscosity solution of the associated infinite-dimensional Hamilton–Jacobi–Bellman equation. Finally, we prove [math]-partial regularity of the value function. We apply these results to path-dependent financial and economic problems (a Merton-like portfolio problem and optimal advertising).