Pub Date: 2023-12-16 | DOI: 10.1007/s10898-023-01347-z
Quoc Tran-Dinh
We develop two “Nesterov’s accelerated” variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion constituted by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued. The first scheme can be viewed as an accelerated variant of Tseng’s forward-backward-forward splitting (FBFS) method, while the second is a Nesterov-accelerated variant of the “past” FBFS scheme, which requires only one evaluation of the Lipschitz operator and one resolvent of the multivalued mapping per iteration. Under appropriate conditions on the parameters, we prove that both algorithms achieve $\mathcal{O}(1/k)$ last-iterate convergence rates on the residual norm, where k is the iteration counter. Our results can be viewed as alternatives to a recent class of Halpern-type methods for root-finding problems. For comparison, we also provide a new convergence analysis of two recent extra-anchored gradient-type methods for solving co-hypomonotone inclusions.
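For intuition, the unaccelerated baseline that both variants build on is the classic extragradient / Tseng FBFS step with the multivalued part set to zero. A minimal sketch, assuming a standard monotone 1-Lipschitz rotation operator as a test case (not an example from the paper):

```python
import numpy as np

# Unaccelerated extragradient / Tseng FBFS step with the multivalued
# part B = 0. The rotation operator A is monotone and 1-Lipschitz;
# lam < 1/L guarantees convergence in this classic setting.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
F = lambda x: A @ x

def fbfs_step(x, lam):
    y = x - lam * F(x)              # first forward (extrapolation) step
    return y - lam * (F(y) - F(x))  # correction, equals x - lam * F(y)

x = np.array([1.0, 1.0])
for k in range(2000):
    x = fbfs_step(x, lam=0.5)
res = float(np.linalg.norm(F(x)))   # residual norm driven toward zero
```

The paper's contribution is to accelerate this scheme to an $\mathcal{O}(1/k)$ last-iterate rate on the residual norm; the sketch only shows the per-iteration structure with two evaluations of F.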
{"title":"Extragradient-type methods with $$mathcal {O}left( 1/kright) $$ last-iterate convergence rates for co-hypomonotone inclusions","authors":"Quoc Tran-Dinh","doi":"10.1007/s10898-023-01347-z","DOIUrl":"https://doi.org/10.1007/s10898-023-01347-z","url":null,"abstract":"<p>We develop two “Nesterov’s accelerated” variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion constituted by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued. The first scheme can be viewed as an accelerated variant of Tseng’s forward-backward-forward splitting (FBFS) method, while the second one is a Nesterov’s accelerated variant of the “past” FBFS scheme, which requires only one evaluation of the Lipschitz operator and one resolvent of the multivalued mapping. Under appropriate conditions on the parameters, we theoretically prove that both algorithms achieve <span>(mathcal {O}left( 1/kright) )</span> last-iterate convergence rates on the residual norm, where <i>k</i> is the iteration counter. Our results can be viewed as alternatives of a recent class of Halpern-type methods for root-finding problems. For comparison, we also provide a new convergence analysis of the two recent extra-anchored gradient-type methods for solving co-hypomonotone inclusions.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"7 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138681693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-16 | DOI: 10.1007/s10898-023-01348-y
Xianfu Wang, Ziyuan Wang
We propose a Bregman inertial forward-reflected-backward (BiFRB) method for nonconvex composite problems. Assuming the generalized concave Kurdyka-Łojasiewicz property, we obtain sequential convergence of BiFRB, as well as convergence rates for both the function values and the actual sequence. One distinguishing feature of our analysis is a careful treatment of the merit function parameters, which circumvents the usual restrictive assumption on the inertial parameters. We also present formulae for the Bregman subproblem, supplementing not only BiFRB but also the work of Boţ-Csetnek-László and Boţ-Csetnek. Numerical simulations are conducted to evaluate the performance of the proposed algorithm.
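As a rough illustration of a forward-reflected-backward step with heavy-ball inertia, here is the Euclidean special case of the Bregman setting on a tiny composite problem. The problem, step size, and inertial parameter are illustrative choices, not BiFRB's actual parameters:

```python
import numpy as np

# min 0.5*||x - b||^2 + tau*||x||_1  (smooth f + nonsmooth g)
# FRB with inertia: x+ = prox_{lam*g}(x + beta*(x - x_prev)
#                                       - lam*(2*grad_f(x) - grad_f(x_prev)))
b, tau = np.array([2.0, 0.1]), 0.5
grad_f = lambda x: x - b
prox_g = lambda z, lam: np.sign(z) * np.maximum(np.abs(z) - lam * tau, 0.0)

lam, beta = 0.3, 0.05              # lam < 1/(2L) with L = 1; small inertia
x_prev = x = np.zeros(2)
g_prev = grad_f(x)
for k in range(500):
    g = grad_f(x)
    x_next = prox_g(x + beta * (x - x_prev) - lam * (2.0 * g - g_prev), lam)
    x_prev, x, g_prev = x, x_next, g
# the minimizer is the soft-threshold of b: [1.5, 0.0]
```

The fixed point satisfies the usual optimality condition x* = prox(x* − λ∇f(x*)); BiFRB generalizes this step by replacing the Euclidean distance with a Bregman distance.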
{"title":"A Bregman inertial forward-reflected-backward method for nonconvex minimization","authors":"Xianfu Wang, Ziyuan Wang","doi":"10.1007/s10898-023-01348-y","DOIUrl":"https://doi.org/10.1007/s10898-023-01348-y","url":null,"abstract":"<p>We propose a Bregman inertial forward-reflected-backward (BiFRB) method for nonconvex composite problems. Assuming the generalized concave Kurdyka-Łojasiewicz property, we obtain sequential convergence of BiFRB, as well as convergence rates on both the function value and actual sequence. One distinguishing feature in our analysis is that we utilize a careful treatment of merit function parameters, circumventing the usual restrictive assumption on the inertial parameters. We also present formulae for the Bregman subproblem, supplementing not only BiFRB but also the work of Boţ-Csetnek-László and Boţ-Csetnek. Numerical simulations are conducted to evaluate the performance of our proposed algorithm.\u0000</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"8 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138681761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-14 | DOI: 10.1007/s10898-023-01343-3
Fusheng Bai, Dongchi Zou, Yutao Wei
Many practical problems involve the optimization of computationally expensive blackbox functions. The computational cost of expensive function evaluations considerably limits the number of true objective function evaluations allowed in order to find a good solution. In this paper, we propose a clustering-based surrogate-assisted evolutionary algorithm, in which a clustering-based local search technique is embedded into the radial basis function surrogate-assisted evolutionary algorithm framework to obtain sample points which might be close to the local solutions of the actual optimization problem. The algorithm generates sample points cyclically via the clustering-based local search: in each cycle, differential evolution is applied to the surrogate model, the cluster centers of the final population are taken as new sample points, and these points are added to the initial population for the differential evolution iterations of the next cycle. In this way, exploration and exploitation are better balanced during the search process. To verify the effectiveness of the present algorithm, it is compared with four state-of-the-art surrogate-assisted evolutionary algorithms on 24 synthetic test problems and one application problem. Experimental results show that the present algorithm outperforms the other algorithms on most of the synthetic test problems and on the application problem.
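The clustering step of each cycle can be sketched as follows; the population data and the plain k-means routine are illustrative stand-ins for the DE population and the clustering used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Cluster the final DE population and return the cluster centers, which
# become the new expensive sample points of the next cycle. Plain
# k-means with a deterministic init (one seed point per region).
def kmeans_centers(pop, seeds, iters=20):
    centers = pop[seeds].astype(float).copy()
    for _ in range(iters):
        d2 = ((pop[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        for j in range(len(seeds)):
            if np.any(labels == j):
                centers[j] = pop[labels == j].mean(axis=0)
    return centers

# hypothetical final population concentrated around two surrogate minima
pop = np.vstack([rng.normal([0.0, 0.0], 0.1, (20, 2)),
                 rng.normal([3.0, 3.0], 0.1, (20, 2))])
centers = kmeans_centers(pop, seeds=[0, 20])
```

Each center is then evaluated with the true expensive objective and injected into the next cycle's initial population, which is how the method concentrates costly evaluations near candidate local solutions.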
{"title":"A surrogate-assisted evolutionary algorithm with clustering-based sampling for high-dimensional expensive blackbox optimization","authors":"Fusheng Bai, Dongchi Zou, Yutao Wei","doi":"10.1007/s10898-023-01343-3","DOIUrl":"https://doi.org/10.1007/s10898-023-01343-3","url":null,"abstract":"<p>Many practical problems involve the optimization of computationally expensive blackbox functions. The computational cost resulting from expensive function evaluations considerably limits the number of true objective function evaluations allowed in order to find a good solution. In this paper, we propose a clustering-based surrogate-assisted evolutionary algorithm, in which a clustering-based local search technique is embedded into the radial basis function surrogate-assisted evolutionary algorithm framework to obtain sample points which might be close to the local solutions of the actual optimization problem. The algorithm generates sample points cyclically by the clustering-based local search, which takes the cluster centers of the ultimate population obtained by the differential evolution iterations applied to the surrogate model in one cycle as new sample points, and these new sample points are added into the initial population for the differential evolution iterations of the next cycle. In this way the exploration and the exploitation are better balanced during the search process. To verify the effectiveness of the present algorithm, it is compared with four state-of-the-art surrogate-assisted evolutionary algorithms on 24 synthetic test problems and one application problem. Experimental results show that the present algorithm outperforms other algorithms on most synthetic test problems and the application problem.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"33 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138630136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-14 | DOI: 10.1007/s10898-023-01345-1
Ksenia Bestuzheva, Antonia Chmiela, Benjamin Müller, Felipe Serrano, Stefan Vigerske, Fabian Wegscheider
For over 10 years, the constraint integer programming framework SCIP has been extended with capabilities for solving convex and nonconvex mixed-integer nonlinear programs (MINLPs). With the recently published version 8.0, these capabilities have been largely reworked and extended. This paper discusses the motivations for the recent changes and provides an overview of features that are particular to MINLP solving in SCIP. Further, difficulties in benchmarking global MINLP solvers are discussed, and a comparison with several state-of-the-art global MINLP solvers is provided.
{"title":"Global optimization of mixed-integer nonlinear programs with SCIP 8","authors":"Ksenia Bestuzheva, Antonia Chmiela, Benjamin Müller, Felipe Serrano, Stefan Vigerske, Fabian Wegscheider","doi":"10.1007/s10898-023-01345-1","DOIUrl":"https://doi.org/10.1007/s10898-023-01345-1","url":null,"abstract":"<p>For over 10 years, the constraint integer programming framework SCIP has been extended by capabilities for the solution of convex and nonconvex mixed-integer nonlinear programs (MINLPs). With the recently published version 8.0, these capabilities have been largely reworked and extended. This paper discusses the motivations for recent changes and provides an overview of features that are particular to MINLP solving in SCIP. Further, difficulties in benchmarking global MINLP solvers are discussed and a comparison with several state-of-the-art global MINLP solvers is provided.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"38 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138629926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-11 | DOI: 10.1007/s10898-023-01346-0
Zhen-Ping Yang, Yong Zhao, Gui-Hua Lin
In this paper, we propose a variable sample-size optimistic mirror descent algorithm under the Bregman distance for a class of stochastic mixed variational inequalities. Unlike conventional variable sample-size extragradient algorithms, which evaluate the expected mapping twice at each iteration, our algorithm requires only one evaluation of the expected mapping per iteration and hence can significantly reduce the computational load. In the monotone case, the proposed algorithm achieves an $\mathcal{O}(1/t)$ ergodic convergence rate in terms of the expected restricted gap function and, under the strongly generalized monotonicity condition, it has a locally linear convergence rate of the Bregman distance between iterates and solutions when the sample size increases geometrically. Furthermore, we derive some results on stochastic local stability under the generalized monotonicity condition. Numerical experiments indicate that the proposed algorithm compares favorably with some existing methods.
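A minimal sketch of the single-evaluation optimistic (past-gradient) update with geometrically growing, capped sample sizes, on a toy strongly monotone mapping F(x) = E[x + ξ]; the mapping, step size, and growth factor are all illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(1)

# Monte Carlo estimate of the expected mapping F(x) = E[x + xi],
# xi ~ N(0, I), using n samples.
def F_hat(x, n):
    return x + rng.normal(0.0, 1.0, (n, x.size)).mean(axis=0)

x = np.array([5.0, -3.0])
gamma, n = 0.3, 4
g_prev = F_hat(x, n)
for k in range(60):
    g = F_hat(x, n)                      # ONE fresh evaluation per iteration
    x = x - gamma * (2.0 * g - g_prev)   # optimistic correction reuses g_prev
    g_prev = g
    n = min(int(n * 1.3) + 1, 2000)      # geometric sample-size growth, capped
err = float(np.linalg.norm(x))           # the solution of F(x) = 0 is x = 0
```

Reusing the previous estimate g_prev is what replaces the second mapping evaluation of extragradient-type schemes, while the growing batch size shrinks the stochastic error as the iterates approach the solution.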
{"title":"Variable sample-size optimistic mirror descent algorithm for stochastic mixed variational inequalities","authors":"Zhen-Ping Yang, Yong Zhao, Gui-Hua Lin","doi":"10.1007/s10898-023-01346-0","DOIUrl":"https://doi.org/10.1007/s10898-023-01346-0","url":null,"abstract":"<p>In this paper, we propose a variable sample-size optimistic mirror descent algorithm under the Bregman distance for a class of stochastic mixed variational inequalities. Different from those conventional variable sample-size extragradient algorithms to evaluate the expected mapping twice at each iteration, our algorithm requires only one evaluation of the expected mapping and hence can significantly reduce the computation load. In the monotone case, the proposed algorithm can achieve <span>({mathcal {O}}(1/t))</span> ergodic convergence rate in terms of the expected restricted gap function and, under the strongly generalized monotonicity condition, the proposed algorithm has a locally linear convergence rate of the Bregman distance between iterations and solutions when the sample size increases geometrically. Furthermore, we derive some results on stochastic local stability under the generalized monotonicity condition. Numerical experiments indicate that the proposed algorithm compares favorably with some existing methods.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"34 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138574424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-24 | DOI: 10.1007/s10898-023-01341-5
E. L. Dias Júnior, P. J. S. Santos, A. Soubeyran, J. C. O. Souza
This paper has two parts. In the mathematical part, we present two inexact versions of the proximal point method for solving quasi-equilibrium problems (QEP) in Hilbert spaces. Under mild assumptions, we prove that the methods find a solution to the quasi-equilibrium problem with an approximated computation of each iteration or using a perturbation of the regularized bifunction. In the behavioral part, we justify the choice of the new perturbation with the help of the main example that drives quasi-equilibrium problems: the Cournot duopoly model, a founding model of game theory. This requires exhibiting a new QEP reformulation of the Cournot model that is more intuitive and rigorous, and it leads directly to the formulation of our perturbation function. Some numerical experiments show the performance of the proposed methods.
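For context, the Cournot duopoly that motivates the QEP reformulation can be stated in a few lines: with linear inverse demand p(Q) = a − b·Q and identical marginal cost c, simultaneous best responses converge to the Nash equilibrium q* = (a − c)/(3b). The parameters below are hypothetical, and the best-response iteration is only a classical illustration, not the paper's proximal point method:

```python
# Cournot duopoly: inverse demand p(Q) = a - b*Q, marginal cost c.
# Each firm's best response to the rival's output q_j is
# q_i = (a - c - b*q_j) / (2b), clipped at zero.
a, b, c = 10.0, 1.0, 1.0

def best_response(q_other):
    return max((a - c - b * q_other) / (2.0 * b), 0.0)

q1 = q2 = 0.0
for _ in range(100):
    q1, q2 = best_response(q2), best_response(q1)
# converges to the Nash equilibrium q* = (a - c) / (3b) = 3 for each firm
```

The QEP viewpoint replaces this fixed-point iteration with a regularized bifunction formulation, which is what the paper's inexact proximal point methods operate on.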
{"title":"On inexact versions of a quasi-equilibrium problem: a Cournot duopoly perspective","authors":"E. L. Dias Júnior, P. J. S. Santos, A. Soubeyran, J. C. O. Souza","doi":"10.1007/s10898-023-01341-5","DOIUrl":"https://doi.org/10.1007/s10898-023-01341-5","url":null,"abstract":"<p>This paper has two parts. In the mathematical part, we present two inexact versions of the proximal point method for solving quasi-equilibrium problems (QEP) in Hilbert spaces. Under mild assumptions, we prove that the methods find a solution to the quasi-equilibrium problem with an approximated computation of each iteration or using a perturbation of the regularized bifunction. In the behavioral part, we justify the choice of the new perturbation, with the help of the main example that drives quasi-equilibrium problems: the Cournot duopoly model, which founded game theory. This requires to exhibit a new QEP reformulation of the Cournot model that will appear more intuitive and rigorous. It leads directly to the formulation of our perturbation function. Some numerical experiments show the performance of the proposed methods.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"13 1","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-24 | DOI: 10.1007/s10898-023-01342-4
Dawid Tarłowski
This paper presents general theoretical studies on the asymptotic convergence rate (ACR) for finite-dimensional optimization. Given a continuous problem function and a discrete-time stochastic optimization process, the ACR is the optimal constant for controlling the asymptotic behaviour of the expected approximation errors. Under general assumptions, the condition ACR < 1 implies linear behaviour of the expected time of hitting the $\varepsilon$-optimal sublevel set as $\varepsilon \rightarrow 0^+$ and determines an upper bound for the convergence rate of the trajectories of the process. This paper provides a general characterization of the ACR and, in particular, shows that some algorithms cannot converge linearly fast for any nontrivial continuous optimization problem. The relation between the asymptotic convergence rate in the objective space and the asymptotic convergence rate in the search space is provided. Examples and numerical simulations using a (1+1) self-adaptive evolution strategy and other algorithms are presented.
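An empirical convergence rate can be estimated along one run of the (1+1) self-adaptive evolution strategy mentioned in the abstract; the sphere function and the 1/5 success rule below are standard illustrative choices, and the geometric-mean estimator is only a crude proxy for the ACR as formally defined in the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: float(x @ x)   # sphere function, minimizer at the origin

# (1+1)-ES with the 1/5 success rule: enlarge the step size on success,
# shrink it on failure. The empirical rate is the geometric mean of the
# per-iteration error ratios along the run.
x, sigma = np.ones(5), 1.0
errs = [f(x)]
for k in range(3000):
    y = x + sigma * rng.normal(size=5)
    if f(y) <= f(x):
        x, sigma = y, sigma * 1.5          # success: widen the search
    else:
        sigma *= 1.5 ** -0.25              # failure: contract the search
    errs.append(f(x))
acr_est = (errs[-1] / errs[0]) ** (1.0 / (len(errs) - 1))
```

A value of acr_est strictly below 1 is the empirical signature of the linear (geometric) convergence that the condition ACR < 1 formalizes.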
{"title":"On asymptotic convergence rate of random search","authors":"Dawid Tarłowski","doi":"10.1007/s10898-023-01342-4","DOIUrl":"https://doi.org/10.1007/s10898-023-01342-4","url":null,"abstract":"<p>This paper presents general theoretical studies on asymptotic convergence rate (ACR) for finite dimensional optimization. Given the continuous problem function and discrete time stochastic optimization process, the ACR is the optimal constant for control of the asymptotic behaviour of the expected approximation errors. Under general assumptions, condition ACR<span>(<1)</span> implies the linear behaviour of the expected time of hitting the <span>(varepsilon )</span>- optimal sublevel set with <span>(varepsilon rightarrow 0^+ )</span> and determines the upper bound for the convergence rate of the trajectories of the process. This paper provides general characterization of ACR and, in particular, shows that some algorithms cannot converge linearly fast for any nontrivial continuous optimization problem. The relation between asymptotic convergence rate in the objective space and asymptotic convergence rate in the search space is provided. Examples and numerical simulations with use of a (1+1) self-adaptive evolution strategy and other algorithms are presented.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"73 4","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-24 | DOI: 10.1007/s10898-023-01340-6
Lulin Tan, Wei Hong Yang, Jinbiao Pan
In this paper, we give some existence theorems of solutions to $\Gamma$-robust counterparts of gap function formulations of uncertain linear complementarity problems, in which $\Gamma$ plays a role in adjusting the robustness of the model against the level of conservatism of solutions. If the $\Gamma$-robust uncertainty set is nonconvex, it is hard to prove the existence of solutions to the corresponding robust counterpart. Using techniques of asymptotic functions, we establish existence theorems of solutions to the corresponding robust counterpart. For the case of nonconvex $\Gamma$-robust ellipsoidal uncertainty sets, these existence results are not proved in the paper [Krebs et al., Int. Trans. Oper. Res. 29 (2022), pp. 417–441]; for the case of convex $\Gamma$-robust ellipsoidal uncertainty sets, our existence theorems are obtained under conditions much weaker than those in Krebs’ paper. Finally, a case study of the uncertain traffic equilibrium problem illustrates the effects of nonconvex uncertainty sets on the level of conservatism of robust solutions.
{"title":"Existence of solutions to $$Gamma $$ -robust counterparts of gap function formulations of uncertain LCPs with ellipsoidal uncertainty sets","authors":"Lulin Tan, Wei Hong Yang, Jinbiao Pan","doi":"10.1007/s10898-023-01340-6","DOIUrl":"https://doi.org/10.1007/s10898-023-01340-6","url":null,"abstract":"<p>In this paper, we give some existence theorems of solutions to <span>(Gamma )</span>-robust counterparts of gap function formulations of uncertain linear complementarity problems, in which <span>(Gamma )</span> plays a role in adjusting the robustness of the model against the level of conservatism of solutions. If the <span>(Gamma )</span>-robust uncertainty set is nonconvex, it is hard to prove the existence of solutions to the corresponding robust counterpart. Using techniques of asymptotic functions, we establish existence theorems of solutions to the corresponding robust counterpart. For the case of nonconvex <span>(Gamma )</span>-robust ellipsoidal uncertainty sets, these existence results are not proved in the paper [Krebs et al., Int. Trans. Oper. Res. 29 (2022), pp. 417–441]; for the case of convex <span>(Gamma )</span>-robust ellipsoidal uncertainty sets, our existence theorems are obtained under the conditions which are much weaker than those in Krebs’ paper. Finally, a case study for the uncertain traffic equilibrium problem is considered to illustrate the effects of nonconvex uncertainty sets on the level of conservatism of robust solutions.</p>","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"55 2","pages":""},"PeriodicalIF":1.8,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Configuring an heterogeneous smartgrid network: complexity and approximations for tree topologies","authors":"Dominique Barth, Thierry Mautor, Dimitri Watel, Marc-Antoine Weisser","doi":"10.1007/s10898-023-01338-0","DOIUrl":"https://doi.org/10.1007/s10898-023-01338-0","url":null,"abstract":"","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"3 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136232310","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-14 | DOI: 10.1007/s10898-023-01337-1
Jie Jiang, Hailin Sun
{"title":"Discrete approximation for two-stage stochastic variational inequalities","authors":"Jie Jiang, Hailin Sun","doi":"10.1007/s10898-023-01337-1","DOIUrl":"https://doi.org/10.1007/s10898-023-01337-1","url":null,"abstract":"","PeriodicalId":15961,"journal":{"name":"Journal of Global Optimization","volume":"93 20","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134901110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}