On the numerical performance of finite-difference-based methods for derivative-free optimization
Pub Date: 2022-09-26. DOI: 10.1080/10556788.2022.2121832
H. Shi, M. Xuan, Figen Öztoprak, J. Nocedal
The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize. In its simplest form, it consists of employing derivative-based methods for unconstrained or constrained optimization and replacing the gradient of the objective (and constraints) by finite-difference approximations. This approach is applicable to problems with or without noise in the functions. The differencing interval is determined by a bound on the second (or third) derivative and by the noise level, which is assumed to be known or to be accessible through difference tables or sampling. The use of finite-difference gradient approximations has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations or as impractical in the presence of noise. However, the test results presented in this paper suggest that it has much to recommend it. The experiments compare NEWUOA, DFO-LS and COBYLA against finite-difference versions of L-BFGS, LMDER and KNITRO on three classes of problems: general unconstrained problems, nonlinear least squares problems and nonlinear programs with inequality constraints.
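As a concrete illustration of the approach described above, the sketch below computes a forward- or central-difference gradient with a differencing interval that balances truncation error against the noise level. The constants and parameter names (noise_level, deriv_bound) are common textbook choices, not necessarily those used by the authors; the resulting gradient could then be handed to any derivative-based solver.

```python
import numpy as np

def fd_gradient(f, x, noise_level, deriv_bound, scheme="forward"):
    """Finite-difference gradient with a noise-aware differencing interval.

    deriv_bound is a bound on the second derivative (forward scheme) or on
    the third derivative (central scheme).  The intervals below follow the
    usual truncation-vs-noise balance; the constants are textbook choices.
    """
    n = x.size
    g = np.zeros(n)
    fx = f(x)
    if scheme == "forward":
        # truncation O(h*M2) vs noise O(eps_f/h)  ->  h ~ sqrt(eps_f/M2)
        h = 2.0 * np.sqrt(noise_level / max(deriv_bound, 1e-16))
    else:
        # central: truncation O(h^2*M3) vs noise O(eps_f/h)  ->  h ~ (eps_f/M3)^(1/3)
        h = (3.0 * noise_level / max(deriv_bound, 1e-16)) ** (1.0 / 3.0)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        if scheme == "forward":
            g[i] = (f(x + e) - fx) / h
        else:
            g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

# toy usage: a noisy quadratic; the gradient estimate can feed any
# derivative-based method (L-BFGS, Gauss-Newton, an SQP code, ...)
rng = np.random.default_rng(0)
f = lambda z: 0.5 * z @ z + 1e-6 * rng.standard_normal()
print(fd_gradient(f, np.ones(5), noise_level=1e-6, deriv_bound=1.0))
```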
{"title":"On the numerical performance of finite-difference-based methods for derivative-free optimization","authors":"H. Shi, M. Xuan, Figen Öztoprak, J. Nocedal","doi":"10.1080/10556788.2022.2121832","DOIUrl":"https://doi.org/10.1080/10556788.2022.2121832","url":null,"abstract":"The goal of this paper is to investigate an approach for derivative-free optimization that has not received sufficient attention in the literature and is yet one of the simplest to implement and parallelize. In its simplest form, it consists of employing derivative-based methods for unconstrained or constrained optimization and replacing the gradient of the objective (and constraints) by finite-difference approximations. This approach is applicable to problems with or without noise in the functions. The differencing interval is determined by a bound on the second (or third) derivative and by the noise level, which is assumed to be known or to be accessible through difference tables or sampling. The use of finite-difference gradient approximations has been largely dismissed in the derivative-free optimization literature as too expensive in terms of function evaluations or as impractical in the presence of noise. However, the test results presented in this paper suggest that it has much to be recommended. The experiments compare newuoa, dfo-ls and cobyla against finite-difference versions of l-bfgs, lmder and knitro on three classes of problems: general unconstrained problems, nonlinear least squares problems and nonlinear programs with inequality constraints.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128557413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates
Pub Date: 2022-09-26. DOI: 10.1080/10556788.2022.2119233
Quoc Tran-Dinh, Deyi Liu
We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature. We prove that our algorithm achieves O(n/k) and O(n^2/k^2) convergence rates in two cases: mere convexity and strong convexity, respectively, where k is the iteration counter and n is the number of block-coordinates. These rates are known to be optimal (up to a constant factor) when n = 1. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem hold for the last-iterate sequence. Our dual convergence guarantee additionally requires a Lipschitz continuity assumption. We specialize our algorithm to handle two important special cases, where our rates still apply. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance on different experiments.
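The paper's algorithm is not reproduced here, but the generic template it belongs to, updating one randomly chosen primal block per iteration together with a dual variable, can be sketched on a toy equality-constrained quadratic. Everything below (the augmented-Lagrangian form, the step sizes, the 1/n dual damping, the problem data) is an illustrative assumption, not the authors' scheme or its convergence guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
n_blocks, blk, m = 8, 4, 6                      # 8 primal blocks of size 4, 6 linear constraints
n = n_blocks * blk
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)
rho = 1.0                                       # augmented-Lagrangian / dual step weight
x, y = np.zeros(n), np.zeros(m)

# schematic randomized block-coordinate primal-dual loop for
#   min_x 0.5*||x - c||^2   s.t.  A x = b
for k in range(4000):
    i = rng.integers(n_blocks)                  # sample one coordinate block uniformly
    sl = slice(i * blk, (i + 1) * blk)
    r = A @ x - b
    # gradient of L_rho(x,y) = 0.5||x-c||^2 + y'(Ax-b) + 0.5*rho*||Ax-b||^2 w.r.t. block i
    g = (x[sl] - c[sl]) + A[:, sl].T @ (y + rho * r)
    tau = 1.0 / (1.0 + rho * np.linalg.norm(A[:, sl], 2) ** 2)   # safe per-block step
    x[sl] -= tau * g                            # partial primal update
    y += (rho / n_blocks) * (A @ x - b)         # damped dual ascent step

print("objective:", 0.5 * np.linalg.norm(x - c) ** 2,
      " feasibility:", np.linalg.norm(A @ x - b))
```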
{"title":"A new randomized primal-dual algorithm for convex optimization with fast last iterate convergence rates","authors":"Quoc Tran-Dinh, Deyi Liu","doi":"10.1080/10556788.2022.2119233","DOIUrl":"https://doi.org/10.1080/10556788.2022.2119233","url":null,"abstract":"We develop a novel unified randomized block-coordinate primal-dual algorithm to solve a class of nonsmooth constrained convex optimization problems, which covers different existing variants and model settings from the literature. We prove that our algorithm achieves and convergence rates in two cases: merely convexity and strong convexity, respectively, where k is the iteration counter and n is the number of block-coordinates. These rates are known to be optimal (up to a constant factor) when n = 1. Our convergence rates are obtained through three criteria: primal objective residual and primal feasibility violation, dual objective residual, and primal-dual expected gap. Moreover, our rates for the primal problem are on the last-iterate sequence. Our dual convergence guarantee requires additionally a Lipschitz continuity assumption. We specify our algorithm to handle two important special cases, where our rates are still applied. Finally, we verify our algorithm on two well-studied numerical examples and compare it with two existing methods. Our results show that the proposed method has encouraging performance on different experiments.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129239659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
New iterative algorithms with self-adaptive step size for solving split equality fixed point problem and its applications
Pub Date: 2022-09-09. DOI: 10.1080/10556788.2022.2117357
Yan Tang, Haiyun Zhou
The purpose of this paper is to propose a new self-adaptive step-size algorithm, which requires neither projections nor prior knowledge of operator norms, for the split equality fixed point problem for a class of quasi-pseudo-contractive mappings. Under appropriate conditions, weak and strong convergence theorems for the presented algorithms are obtained, respectively. Furthermore, the proposed algorithm is also applied to approximate solutions of the split equality equilibrium and split equality inclusion problems.
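For intuition about a step size that needs no operator norm, here is the classical self-adaptive step for the simpler split feasibility (CQ) problem. Unlike the algorithm above, this sketch does use projections; the sets, operator, and the relaxation factor rho below are purely illustrative choices, not the authors' method.

```python
import numpy as np

def cq_self_adaptive(A, proj_C, proj_Q, x0, iters=500, rho=1.0):
    """Self-adaptive step size for the split feasibility problem
    (find x in C with Ax in Q), a simpler relative of the split equality
    fixed point problem.  The step uses only f(x_k) and its gradient,
    so no estimate of ||A|| is required."""
    x = x0.copy()
    for _ in range(iters):
        Ax = A @ x
        resid = Ax - proj_Q(Ax)          # (I - P_Q) A x
        f_val = 0.5 * resid @ resid
        grad = A.T @ resid               # gradient of f(x) = 0.5*||(I - P_Q)Ax||^2
        gnorm2 = grad @ grad
        if gnorm2 < 1e-16:               # already (nearly) a solution
            break
        tau = rho * f_val / gnorm2       # self-adaptive step size
        x = proj_C(x - tau * grad)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 8))
proj_C = lambda z: np.maximum(z, 0.0)    # C: nonnegative orthant
proj_Q = lambda z: np.clip(z, -1.0, 1.0) # Q: box [-1, 1]
x = cq_self_adaptive(A, proj_C, proj_Q, x0=np.ones(8))
print("residual after run:", np.linalg.norm(A @ x - proj_Q(A @ x)))
```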
{"title":"New iterative algorithms with self-adaptive step size for solving split equality fixed point problem and its applications","authors":"Yan Tang, Haiyun Zhou","doi":"10.1080/10556788.2022.2117357","DOIUrl":"https://doi.org/10.1080/10556788.2022.2117357","url":null,"abstract":"The purpose of this paper is to propose a new alternative step size algorithm without using projections and without prior knowledge of operator norms to the split equality fixed point problem for a class of quasi-pseudo-contractive mappings. Under appropriate conditions, weak and strong convergence theorems for the presented algorithms are obtained, respectively. Furthermore, the algorithm proposed in this paper is also applied to approximate the solution of the split equality equilibrium and split equality inclusion problems.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115920672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Practical perspectives on symplectic accelerated optimization
Pub Date: 2022-07-23. DOI: 10.1080/10556788.2023.2214837
Valentin Duruisseaux, M. Leok
Geometric numerical integration has recently been exploited to design symplectic accelerated optimization algorithms by simulating the Lagrangian and Hamiltonian systems arising from the variational framework introduced in Wibisono et al. In this paper, we discuss practical considerations that can significantly boost the computational performance of these optimization algorithms and considerably simplify the tuning process. In particular, we investigate how momentum restarting schemes improve computational efficiency and robustness by reducing the undesirable effect of oscillations, and ease the tuning process by making time-adaptivity superfluous. We also discuss how temporal looping helps avoid instability issues caused by numerical precision, without harming the computational efficiency of the algorithms. Finally, we compare the efficiency and robustness of different geometric integration techniques, and study the effects of the different parameters in the algorithms to inform and simplify tuning in practice. From this paper emerge symplectic accelerated optimization algorithms whose computational efficiency, stability and robustness have been improved, and which are now much simpler to use and tune for practical applications.
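Momentum restarting is easiest to see in a plain Nesterov-type loop. The sketch below uses the gradient-based restart test of O'Donoghue and Candès as a stand-in for the restarting schemes studied in the paper; the ill-conditioned quadratic and the step size are assumed toy choices, and this is not the paper's symplectic integrator.

```python
import numpy as np

def nag_with_restart(grad, x0, step, iters=300, restart=True):
    """Nesterov-style accelerated gradient with a gradient-based momentum
    restart: when the momentum term starts pointing uphill, reset it to
    suppress the oscillations that restarting is meant to remove."""
    x = x0.copy()
    x_prev = x0.copy()
    t = 1.0
    for _ in range(iters):
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        beta = (t - 1.0) / t_next
        y = x + beta * (x - x_prev)          # extrapolated (momentum) point
        g = grad(y)
        x_new = y - step * g
        if restart and g @ (x_new - x) > 0:  # momentum no longer a descent direction
            t_next = 1.0                     # reset the momentum sequence
            x_new = x - step * grad(x)       # fall back to a plain gradient step
        x_prev, x, t = x, x_new, t_next
    return x

# toy usage on an ill-conditioned quadratic f(x) = 0.5 * x' D x
D = np.diag(np.logspace(0, 3, 20))
grad = lambda z: D @ z
x_final = nag_with_restart(grad, x0=np.ones(20), step=1.0 / 1e3)
print("final ||x||:", np.linalg.norm(x_final))
```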
{"title":"Practical perspectives on symplectic accelerated optimization","authors":"Valentin Duruisseaux, M. Leok","doi":"10.1080/10556788.2023.2214837","DOIUrl":"https://doi.org/10.1080/10556788.2023.2214837","url":null,"abstract":"Geometric numerical integration has recently been exploited to design symplectic accelerated optimization algorithms by simulating the Lagrangian and Hamiltonian systems from the variational framework introduced in Wibisono et al. In this paper, we discuss practical considerations which can significantly boost the computational performance of these optimization algorithms, and considerably simplify the tuning process. In particular, we investigate how momentum restarting schemes ameliorate computational efficiency and robustness by reducing the undesirable effect of oscillations, and ease the tuning process by making time-adaptivity superfluous. We also discuss how temporal looping helps avoiding instability issues caused by numerical precision, without harming the computational efficiency of the algorithms. Finally, we compare the efficiency and robustness of different geometric integration techniques, and study the effects of the different parameters in the algorithms to inform and simplify tuning in practice. From this paper emerge symplectic accelerated optimization algorithms whose computational efficiency, stability and robustness have been improved, and which are now much simpler to use and tune for practical applications.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127179302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exact gradient methods with memory
Pub Date: 2022-07-20. DOI: 10.1080/10556788.2022.2091559
Mihai I. Florea
The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piecewise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of O(1/k), the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation, yielding an Accelerated Gradient Method with Memory (AGMM) possessing a worst-case rate of O(1/k^2). In our preliminary computational experiments EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently exceeds the Fast Gradient Method.
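The piecewise-linear lower model that gives these methods their "memory" can be sketched directly. The bundle class and the convex quadratic test below are illustrative only; they do not reproduce the auxiliary subproblem actually solved by EGMM/AGMM.

```python
import numpy as np

class PiecewiseLinearModel:
    """Bundle of linearizations  l(x) = max_i { f(x_i) + <g_i, x - x_i> }.
    For convex f this is a global lower model; gradient methods with memory
    combine it with a proximal term in their auxiliary subproblem."""
    def __init__(self):
        self.points, self.vals, self.grads = [], [], []

    def add(self, x, fx, gx):
        self.points.append(x.copy())
        self.vals.append(fx)
        self.grads.append(gx.copy())

    def __call__(self, x):
        return max(v + g @ (x - p)
                   for p, v, g in zip(self.points, self.vals, self.grads))

# toy check on a convex quadratic: the model never exceeds the true function
f = lambda z: 0.5 * z @ z
grad = lambda z: z
model = PiecewiseLinearModel()
rng = np.random.default_rng(3)
for _ in range(5):
    p = rng.standard_normal(4)
    model.add(p, f(p), grad(p))
x_test = rng.standard_normal(4)
print(model(x_test) <= f(x_test))   # True: lower model for a convex objective
```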
{"title":"Exact gradient methods with memory","authors":"Mihai I. Florea","doi":"10.1080/10556788.2022.2091559","DOIUrl":"https://doi.org/10.1080/10556788.2022.2091559","url":null,"abstract":"ABSTRACT The Inexact Gradient Method with Memory (IGMM) is able to considerably outperform the Gradient Method by employing a piece-wise linear lower model on the smooth part of the objective. However, the auxiliary problem can only be solved within a fixed tolerance at every iteration. The need to contain the inexactness narrows the range of problems to which IGMM can be applied and degrades the worst-case convergence rate. In this work, we show how a simple modification of IGMM removes the tolerance parameter from the analysis. The resulting Exact Gradient Method with Memory (EGMM) is as broadly applicable as the Bregman Distance Gradient Method/NoLips and has the same worst-case rate of , the best for its class. Under necessarily stricter assumptions, we can accelerate EGMM without error accumulation yielding an Accelerated Gradient Method with Memory (AGMM) possessing a worst-case rate of . In our preliminary computational experiments EGMM displays excellent performance, sometimes surpassing accelerated methods. When the model discards old information, AGMM also consistently exceeds the Fast Gradient Method.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115279808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sequential approximate optimization with adaptive parallel infill strategy assisted by inaccurate Pareto front
Pub Date: 2022-07-19. DOI: 10.1080/10556788.2022.2091560
Wenjie Wang, Pengyu Wang, Jiawei Yang, Fei Xiao, Weihua Zhang, Zeping Wu
Sequential Approximate Optimization (SAO) has been widely used in engineering design optimization problems to improve efficiency. The infill strategy is one of the critical techniques of SAO and is of paramount importance to surrogate-model accuracy and optimization efficiency. In this paper, an adaptive parallel infill strategy for surrogate-based single-objective optimization is proposed within a multi-objective optimization framework to balance exploration and exploitation during the optimization process. Within this method, an inaccurate Pareto front is adopted to assist the infilling of the sampling points. The proposed SAO method with its adaptive parallel sampling strategy is tested on several numerical test cases and an engineering test case, with the optimization results compared to state-of-the-art optimization algorithms. The results show that the proposed SAO with the adaptive parallel sampling strategy possesses excellent performance and better stability.
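A minimal sketch of the exploration/exploitation trade-off behind such an infill strategy, assuming a simple Gaussian RBF surrogate and a two-objective non-dominated selection (surrogate prediction versus distance to existing samples). The authors' actual surrogate, criteria, and Pareto-front construction may differ.

```python
import numpy as np

def rbf_fit(X, y, eps=1e-8):
    """Gaussian RBF interpolant used as a cheap stand-in surrogate."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2) + eps * np.eye(len(X))        # small ridge for conditioning
    w = np.linalg.solve(Phi, y)
    return lambda Z: np.exp(-((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)) @ w

def parallel_infill(X, y, candidates):
    """Select a batch of infill points as the non-dominated set of a
    two-objective problem: minimise the surrogate prediction (exploitation)
    and maximise the distance to existing samples (exploration)."""
    pred = rbf_fit(X, y)(candidates)
    dist = np.min(np.linalg.norm(candidates[:, None, :] - X[None, :, :], axis=-1), axis=1)
    keep = []
    for i in range(len(candidates)):
        dominated = any(pred[j] <= pred[i] and dist[j] >= dist[i] and
                        (pred[j] < pred[i] or dist[j] > dist[i])
                        for j in range(len(candidates)))
        if not dominated:
            keep.append(i)
    return candidates[keep]

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, size=(10, 2))
y = (X ** 2).sum(1)                                  # samples of a toy objective
batch = parallel_infill(X, y, rng.uniform(-2, 2, size=(200, 2)))
print("infill batch size:", len(batch))
```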
{"title":"Sequential approximate optimization with adaptive parallel infill strategy assisted by inaccurate Pareto front","authors":"Wenjie Wang, Pengyu Wang, Jiawei Yang, Fei Xiao, Weihua Zhang, Zeping Wu","doi":"10.1080/10556788.2022.2091560","DOIUrl":"https://doi.org/10.1080/10556788.2022.2091560","url":null,"abstract":"Sequential Approximate Optimization (SAO) has been widely used in engineering optimization design problems to improve efficiency. The infilling strategy is one of the critical techniques of the SAO, which is of paramount importance to the surrogate model accuracy and optimization efficiency. In this paper, an adaptive parallel infill strategy for surrogate-based single-objective optimization is proposed within a multi-objective optimization framework to balance exploration and exploitation during the optimization process. Within this method, an inaccurate Pareto Front is adopted to assist the infilling of the sampling points. The proposed SAO method with its adaptive parallel sampling strategy is tested on several numerical test cases and an engineering test case with the optimization results compared to state-of-the-art optimization algorithms. The results show that the proposed SAO with the adaptive parallel sampling strategy possesses excellent performance and better stability.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"194 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132471580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A stochastic approximation method for convex programming with many semidefinite constraints
Pub Date: 2022-07-19. DOI: 10.1080/10556788.2022.2091563
L. Pang, Ming-Kun Zhang, X. Xiao
In this paper, we consider a type of semidefinite programming problem (MSDP) that involves many (possibly infinitely many) semidefinite constraints. MSDP arises in a wide range of applications, including the covering-ellipsoids problem and truss topology design. We propose a randomized method based on a stochastic approximation technique for solving MSDP, without calculating and storing the multiplier. Under mild conditions, we establish the almost sure convergence and expected convergence rates of the proposed method. A variety of simulation experiments are carried out to support our theoretical results.
{"title":"A stochastic approximation method for convex programming with many semidefinite constraints","authors":"L. Pang, Ming-Kun Zhang, X. Xiao","doi":"10.1080/10556788.2022.2091563","DOIUrl":"https://doi.org/10.1080/10556788.2022.2091563","url":null,"abstract":"In this paper, we consider a type of semidefinite programming problem (MSDP), which involves many (not necessarily finite) of semidefinite constraints. MSDP can be established in a wide range of applications, including covering ellipsoids problem and truss topology design. We propose a random method based on a stochastic approximation technique for solving MSDP, without calculating and storing the multiplier. Under mild conditions, we establish the almost sure convergence and expected convergence rates of the proposed method. A variety of simulation experiments are carried out to support our theoretical results.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133822618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error
Pub Date: 2022-07-19. DOI: 10.1080/10556788.2022.2091562
Jia Hu, Congying Han, Tiande Guo, Tong Zhao
We consider a class of nonconvex composite optimization problems, both stochastic and deterministic, whose objective function consists of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. In this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex optimization problems: the stochastic alternating direction method of multipliers and stochastic proximal gradient descent. In particular, several inexact versions of these two types of splitting methods are studied. At each iteration, the proposed schemes solve their subproblems inexactly using relative error criteria instead of exogenous and diminishing error rules, which allows our approaches to handle some complex regularized problems in statistics and machine learning. Under mild conditions, we obtain the convergence of the schemes and their computational complexity in terms of evaluations of the component gradients of the smooth function, and find that some conclusions of their exact counterparts can be recovered.
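To make the relative-error idea concrete, the sketch below runs a stochastic proximal gradient loop in which the proximal subproblem (a total-variation-like term with no closed-form prox) is solved by an inner dual projected-gradient loop stopped by a relative-error test. The specific test, penalty, and parameters (theta, lam, gamma, the problem data) are illustrative assumptions rather than the criteria analysed in the paper.

```python
import numpy as np

def inexact_prox_tv(v, lam_gamma, theta, ref_norm, max_inner=200):
    """Approximate prox of lam_gamma*||D u||_1 (D = first differences) via
    projected gradient on the dual variable q, stopped once the inner
    progress is small relative to ref_norm (an illustrative relative-error
    test tied to the outer step length)."""
    q = np.zeros(v.size - 1)
    step = 0.25                                    # 1/||D D^T|| for first differences
    for _ in range(max_inner):
        u = v.copy()
        u[:-1] += q
        u[1:] -= q                                 # u = v - D^T q
        q_new = np.clip(q + step * (u[1:] - u[:-1]), -lam_gamma, lam_gamma)
        if np.linalg.norm(q_new - q) <= theta * ref_norm + 1e-12:
            q = q_new
            break
        q = q_new
    u = v.copy()
    u[:-1] += q
    u[1:] -= q
    return u

# outer loop: stochastic proximal gradient on (1/N)*sum 0.5*(a_i'x - b_i)^2 + lam*||Dx||_1
rng = np.random.default_rng(5)
N, n = 200, 30
A = rng.standard_normal((N, n))
x_true = np.repeat([1.0, -1.0, 2.0], 10)           # piecewise-constant ground truth
b = A @ x_true + 0.1 * rng.standard_normal(N)
x, lam, gamma = np.zeros(n), 0.5, 0.01
for k in range(3000):
    i = rng.integers(N)
    g = (A[i] @ x - b[i]) * A[i]                   # stochastic gradient of the smooth part
    v = x - gamma * g
    x = inexact_prox_tv(v, lam * gamma, theta=0.5, ref_norm=np.linalg.norm(v - x))
print("distance to ground truth:", np.linalg.norm(x - x_true))
```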
{"title":"On inexact stochastic splitting methods for a class of nonconvex composite optimization problems with relative error","authors":"Jia Hu, Congying Han, Tiande Guo, Tong Zhao","doi":"10.1080/10556788.2022.2091562","DOIUrl":"https://doi.org/10.1080/10556788.2022.2091562","url":null,"abstract":"We consider minimizing a class of nonconvex composite stochastic optimization problems, and deterministic optimization problems whose objective function consists of an expectation function (or an average of finitely many smooth functions) and a weakly convex but potentially nonsmooth function. And in this paper, we focus on the theoretical properties of two types of stochastic splitting methods for solving these nonconvex optimization problems: stochastic alternating direction method of multipliers and stochastic proximal gradient descent. In particular, several inexact versions of these two types of splitting methods are studied. At each iteration, the proposed schemes inexactly solve their subproblems by using relative error criteria instead of exogenous and diminishing error rules, which allows our approaches to handle some complex regularized problems in statistics and machine learning. Under mild conditions, we obtain the convergence of the schemes and their computational complexity related to the evaluations on the component gradient of the smooth function, and find that some conclusions of their exact counterparts can be recovered.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116841254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A quasi-Newton method in shape optimization for a transmission problem
Pub Date: 2022-06-01. DOI: 10.1080/10556788.2022.2078823
Petar Kunštek, M. Vrdoljak
We consider optimal design problems in stationary diffusion for mixtures of two isotropic phases. The goal is to find an optimal distribution of the phases such that the energy functional is maximized. Following the identity perturbation method, we calculate the first- and second-order shape derivatives in the distributional representation under weak regularity assumptions. Ascent methods based on the distributed first- and second-order shape derivatives are implemented and tested on classes of problems for which classical solutions exist and can be calculated explicitly from the optimality conditions. The proposed quasi-Newton method offers a better ascent vector than gradient methods, reaching the optimal design in half as many steps. The method also applies well to multiple-state problems.
{"title":"A quasi-Newton method in shape optimization for a transmission problem","authors":"Petar Kunštek, M. Vrdoljak","doi":"10.1080/10556788.2022.2078823","DOIUrl":"https://doi.org/10.1080/10556788.2022.2078823","url":null,"abstract":"We consider optimal design problems in stationary diffusion for mixtures of two isotropic phases. The goal is to find an optimal distribution of the phases such that the energy functional is maximized. By following the identity perturbation method, we calculate the first- and second-order shape derivatives in the distributional representation under weak regularity assumptions. Ascent methods based on the distributed first- and second-order shape derivatives are implemented and tested in classes of problems for which the classical solutions exist and can be explicitly calculated from the optimality conditions. A proposed quasi-Newton method offers a better ascent vector compared to gradient methods, reaching the optimal design in half as many steps. The method applies well also for multiple state problems.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123574855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A wide neighbourhood predictor–corrector infeasible-interior-point algorithm for symmetric cone programming
Pub Date: 2022-05-12. DOI: 10.1080/10556788.2022.2060970
M. S. Shahraki, H. Mansouri, A. Delavarkhalafi
In this paper, we propose a new predictor–corrector infeasible-interior-point algorithm for symmetric cone programming. Each iterate tracks the usual wide neighbourhood; it need not stay within it, but must remain within a wider neighbourhood. We prove that, besides the predictor step, each corrector step also reduces the duality gap at a rate that depends on r, the rank of the associated Euclidean Jordan algebra. Moreover, we improve the theoretical complexity bound of an infeasible-interior-point method. Some numerical results are provided as well.
{"title":"A wide neighbourhood predictor–corrector infeasible-interior-point algorithm for symmetric cone programming","authors":"M. S. Shahraki, H. Mansouri, A. Delavarkhalafi","doi":"10.1080/10556788.2022.2060970","DOIUrl":"https://doi.org/10.1080/10556788.2022.2060970","url":null,"abstract":"In this paper, we propose a new predictor–corrector infeasible-interior-point algorithm for symmetric cone programming. Each iterate always follows the usual wide neighbourhood , it does not necessarily stay within it but must stay within the wider neighbourhood . We prove that, besides the predictor step, each corrector step also reduces the duality gap by a rate of , where r is the rank of the associated Euclidean Jordan algebra. Moreover, we improve the theoretical complexity bound of an infeasible-interior-point method. Some numerical results are provided as well.","PeriodicalId":124811,"journal":{"name":"Optimization Methods and Software","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123251401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}