Verification Methods for the Lyapunov–Krasovskii Functional Inequalities
Ali Diab, Giorgio Valmorbida, William Pasillas-Lépine
SIAM Journal on Control and Optimization, Volume 62, Issue 2, pp. 877–902, April 2024. DOI: 10.1137/22m1542167
Abstract. We study parameterizations of Lyapunov–Krasovskii functionals (LKFs) for analyzing the stability of linear time-delay systems. We discuss the delay Lyapunov matrix, whose solution yields an LKF with a prescribed time derivative, and relate it to the approaches commonly used in the numerical computation of LKFs. We then compare two approaches to the stability analysis of time-delay systems based on semidefinite programming, namely the method based on integral inequalities and the method based on sum-of-squares programming, which have recently emerged as optimization-based methods to compute LKFs. We discuss their main assumptions and establish connections between the two methods. Finally, we formulate a projection-based method that allows general sets of functions to parameterize LKFs, thus encompassing the sets of polynomial functions used in the literature. The solutions of the proposed stability conditions and the construction of the corresponding LKFs as stability certificates are illustrated with numerical examples.
On Global Approximate Controllability of a Quantum Particle in a Box by Moving Walls
Aitor Balmaseda, Davide Lonigro, Juan Manuel Pérez-Pardo
SIAM Journal on Control and Optimization, Volume 62, Issue 2, pp. 826–852, April 2024. DOI: 10.1137/22m1518980
Abstract. We study a system composed of a free quantum particle trapped in a box whose walls can change their position. We prove the global approximate controllability of the system: any initial pure state can be driven arbitrarily close to any target pure state in the Hilbert space of the free particle, with a predetermined final position of the box. To this end, we consider weak solutions of the Schrödinger equation and use a stability theorem for the time-dependent Schrödinger equation.
Long-Run Impulse Control with Generalized Discounting
Damian Jelito, Łukasz Stettner
SIAM Journal on Control and Optimization, Volume 62, Issue 2, pp. 853–876, April 2024. DOI: 10.1137/23m1582539
Abstract. In this paper, we investigate the effects of applying generalized (nonexponential) discounting to a long-run impulse control problem for a Feller–Markov process. We show that the optimal value of the discounted problem is the same as that of its undiscounted version. Next, we prove that an optimal strategy for the undiscounted discrete-time functional is also optimal for the discrete-time discounted criterion and nearly optimal for the continuous-time discounted one. This shows that the discounted problem, although time-inconsistent in nature, admits a time-consistent solution. Also, instead of a complex time-dependent Bellman equation, one may consider its simpler time-independent version.
Learning Stationary Nash Equilibrium Policies in [math]-Player Stochastic Games with Independent Chains
S. Rasoul Etesami
SIAM Journal on Control and Optimization, Volume 62, Issue 2, pp. 799–825, April 2024. DOI: 10.1137/22m1512880
Abstract. We consider a subclass of [math]-player stochastic games in which players have their own internal state/action spaces but are coupled through their payoff functions. It is assumed that players' internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each other's states/actions. For this class of games, we first show that finding a stationary Nash equilibrium (NE) policy without any assumption on the reward functions is intractable. For general reward functions, however, we develop polynomial-time learning algorithms based on dual averaging and dual mirror descent, which converge in terms of the averaged Nikaido–Isoda distance to the set of [math]-NE policies almost surely or in expectation. In particular, under extra assumptions on the reward functions, such as social concavity, we derive polynomial upper bounds on the number of iterations needed to achieve an [math]-NE policy with high probability. Finally, we evaluate the effectiveness of the proposed algorithms in learning [math]-NE policies using numerical experiments for energy management in smart grids.
Spectral Factorization of Rank-Deficient Rational Densities
Wenqi Cao, Anders Lindquist
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 776–798, February 2024. DOI: 10.1137/23m1546622
Abstract. Though there are hundreds of papers on rational spectral factorization, most of them concern full-rank spectral densities. In this paper we propose a novel approach to the spectral factorization of a rank-deficient spectral density, leading to a minimum-phase full-rank spectral factor, in both the discrete-time and continuous-time cases. Compared with several existing approaches to low-rank spectral factorization, ours exploits a deterministic relation inside the factor, leading to high computational efficiency. In addition, we show that the method is readily applied to the identification of low-rank processes and to Wiener filtering.
Leader-Following Rendezvous Control for Generalized Cucker–Smale Model on Riemannian Manifolds
Xiaoyu Li, Yuhu Wu, Lining Ru
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 724–751, February 2024. DOI: 10.1137/23m1545811
Abstract. This paper studies a leader-following rendezvous problem for the generalized Cucker–Smale model, a double-integrator multiagent system, on certain Riemannian manifolds. Using intrinsic properties of the covariant derivative, the logarithmic map, and parallel transport on Riemannian manifolds, we design a feedback control law and prove that it enables all followers to track the trajectory of the moving leader when the Riemannian manifold is compact or flat. As concrete examples, we consider the leader-following rendezvous problem on the unit sphere, in Euclidean space, on the unit circle, and on the infinite cylinder, and present the corresponding feedback control laws. Numerical examples on these manifolds illustrate and verify the theoretical results.
Turnpike Properties for Mean-Field Linear-Quadratic Optimal Control Problems
Jingrui Sun, Jiongmin Yong
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 752–775, February 2024. DOI: 10.1137/22m1524187
Abstract. This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic cost functional over the infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties of the optimal pair are established. The keys are to correctly formulate the corresponding static optimization problem and to find the equations determining the correction processes. These reveal the main feature of the stochastic problem, which differs significantly from the deterministic version of the theory.
On the Modeling of Impulse Control with Random Effects for Continuous Markov Processes
Kurt L. Helmes, Richard H. Stockbridge, Chao Zhu
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 699–723, February 2024. DOI: 10.1137/19m1286967
Abstract. The use of coordinate processes to model impulse control for general Markov processes typically involves constructing a probability measure on a countable product of copies of the path space. In addition, admissibility of an impulse control policy requires that the random intervention times be stopping times with respect to the different filtrations arising from the different component coordinate processes. When the underlying Markov process has continuous paths, however, a simpler model can be developed that takes the single path space as its probability space and uses the natural filtration, with respect to which the intervention times must be stopping times. Moreover, this construction allows for impulse control with random effects, whereby the decision maker selects a distribution for the new state. This paper constructs the probability measure on the path space for an admissible intervention policy subject to a randomized impulse mechanism. In addition, a class of policies is defined for which the paths between interventions are independent, along with a further subclass for which the cycles following the initial cycle are identically distributed. A benefit of this smaller subclass is that classical renewal arguments can be used to analyze long-term average control problems. Further, the paper defines a class of stationary impulse policies for which the family of models gives a Markov family. The decision to use an [math] ordering policy in inventory management provides an example of an impulse policy for which the process has independent and identically distributed cycles and the family of models forms a Markov family.
Optimal Control Duality and the Douglas–Rachford Algorithm
Regina S. Burachik, Bethany I. Caldwell, C. Yalçin Kaya, Walaa M. Moursi
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 680–698, February 2024. DOI: 10.1137/23m1558549
Abstract. We explore the relationship between the dual of a weighted minimum-energy control problem, a special case of linear-quadratic optimal control problems, and the Douglas–Rachford (DR) algorithm. We obtain an expression for the fixed point of the DR operator as applied to solving the optimal control problem, which in turn yields a certificate of optimality that can be employed for numerical verification. The fixed point and the optimality check are illustrated in two example optimal control problems.
Primal-Dual Regression Approach for Markov Decision Processes with General State and Action Spaces
Denis Belomestny, John Schoenmakers
SIAM Journal on Control and Optimization, Volume 62, Issue 1, pp. 650–679, February 2024. DOI: 10.1137/22m1526010
Abstract. We develop a regression-based primal-dual martingale approach for solving discrete-time, finite-horizon Markov decision processes (MDPs). The state and action spaces may be finite or infinite (but sufficiently regular) subsets of Euclidean space. Our method allows the construction of tight upper- and lower-biased approximations of the value functions, providing precise estimates of the optimal policy. Importantly, we prove error bounds for the estimated duality gap featuring polynomial dependence on the time horizon, and we observe sublinear dependence of the stochastic part of the error on the cardinality/dimension of the state and action spaces. From a computational perspective, the proposed method is efficient: unlike typical duality-based methods for optimal control problems in the literature, the Monte Carlo procedures involved here do not require nested simulations.