The complexity of high-order predictor-corrector methods for solving sufficient linear complementarity problems
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805721
J. Stoer, Martin Wechs
Recently the authors of this paper and S. Mizuno described a class of infeasible-interior-point methods for solving linear complementarity problems that are sufficient in the sense of R.W. Cottle, J.-S. Pang and V. Venkateswaran (1989), Sufficient matrices and the linear complementarity problem, Linear Algebra Appl. 114/115, 231–249. It was shown that these methods converge superlinearly with an arbitrarily high order even for degenerate problems or problems without a strictly complementary solution. In this paper the complexity of these methods is investigated. It is shown that all these methods, if started appropriately, need predictor-corrector steps to find an ε-solution, and only steps, if the problem has strictly interior points. Here κ is the sufficiency parameter of the complementarity problem.
{"title":"The complexity of high-order predictor-corrector methods for solving sufficient linear complementarity problems","authors":"J. Stoer, Martin Wechs","doi":"10.1080/10556789808805721","DOIUrl":"https://doi.org/10.1080/10556789808805721","url":null,"abstract":"Recently the authors of this paper and S. Mizuno described a class of infeasible-interiorpoint methods for solving linear complementarity problems that are sufficient in the sense of R.W. Cottle, J.-S. Pang and V. Venkateswaran (1989) Sufficient matrices and the linear complementarity problemLinear Algebra AppL 114/115,231-249. It was shown that these methods converge superlinearly with an arbitrarily high order even for degenerate problems or problems without strictly complementary solution. In this paper the complexity of these methods is investigated. It is shown that all these methods, if started appropriately, need predictor-corrector steps to find an e-solution, and only steps, if the problem has strictly interior points. HereK is the sufficiency parameter of the complementarity problem.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84844864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computing a sparse Jacobian matrix by rows and columns
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805700
A. Hossain, T. Steihaug
Efficient estimation of large sparse Jacobian matrices has been studied extensively in recent years. It has been observed that the estimation of a Jacobian matrix can be posed as a graph coloring problem. Elements of the matrix are estimated by taking divided differences in several directions, each corresponding to a group of structurally independent columns. Another possibility is to obtain the nonzero elements by means of so-called automatic differentiation, which gives estimates free of the truncation error that one encounters in a divided difference scheme. In this paper we show that it is possible to exploit sparsity both in columns and in rows by employing the forward and the reverse modes of automatic differentiation. A graph-theoretic characterization of the problem is given.
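The column side of this idea can be made concrete with a small sketch. The following Python code is our illustration, not from the paper, and shows only the divided-difference/column-grouping half of the story (the paper's contribution combines rows and columns via automatic differentiation): each group holds structurally independent columns, i.e. no two columns in a group share a nonzero row, so one extra function evaluation recovers all their nonzeros at once.

```python
import numpy as np

def estimate_jacobian_by_groups(F, x, groups, pattern, h=1e-7):
    """Estimate a sparse Jacobian of F at x by divided differences,
    one extra evaluation of F per group of structurally independent
    columns (columns in a group share no nonzero rows)."""
    m, n = pattern.shape
    J = np.zeros((m, n))
    F0 = F(x)
    for group in groups:
        d = np.zeros(n)
        d[group] = 1.0                      # perturb every column of the group at once
        diff = (F(x + h * d) - F0) / h      # a single extra function evaluation
        for j in group:
            rows = pattern[:, j]            # rows only column j can touch in this group
            J[rows, j] = diff[rows]
    return J

# Tiny illustration: columns 0 and 1 share no rows, so they form one group.
def F(x):
    return np.array([x[0]**2, x[1]**2, x[0] * x[2]])

pattern = np.array([[True,  False, False],
                    [False, True,  False],
                    [True,  False, True ]])
x = np.array([1.0, 2.0, 3.0])
J = estimate_jacobian_by_groups(F, x, groups=[[0, 1], [2]], pattern=pattern)
```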
{"title":"Computing a sparse Jacobian matrix by rows and columns","authors":"A. Hossain, T. Steihaug","doi":"10.1080/10556789808805700","DOIUrl":"https://doi.org/10.1080/10556789808805700","url":null,"abstract":"Efficient estimation of large sparse Jacobian matrices has been studied extensively in the last couple of years. It has been observed that the estimation of Jacobian matrix can be posed as a graph coloring problem. Elements of the matrix are estimated by taking divided difference in several directions corresponding to a group of structurally independent columns. Another possibility is to obtain the nonzero elements by means of the so called Automatic differentiation, which gives the estimates free of truncation error that one encounters in a divided difference scheme. In this paper we show that it is possible to exploit sparsity both in columns and rows by employing the forward and the reverse mode of Automatic differentiation. A graph-theoretic characterization of the problem is given.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87239737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multiplier adjustment technique for the capacitated concentrator location problem
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805703
M. Celani, R. Cerulli, M. Gaudioso, Y. Sergeyev
We describe a new dual descent method for a pure 0–1 location problem known as the capacitated concentrator location problem. The multiplier adjustment technique presented aims to find an upper bound in a Lagrangean relaxation context, permitting multipliers both to decrease and to increase in the course of the search, in contrast with methods where they are updated monotonically.
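For orientation, the generic Lagrangean relaxation bound behind such schemes has the following standard form (notation ours, not the paper's): relaxing the constraints $g(x) \le 0$ of $\min\{f(x) : g(x) \le 0,\ x \in X\}$ with multipliers $\lambda \ge 0$ gives, for every such $\lambda$,

$$ L(\lambda) \;=\; \min_{x \in X}\Big( f(x) + \lambda^{\top} g(x) \Big) \;\le\; \min\{\, f(x) : g(x) \le 0,\ x \in X \,\}. $$

A monotone adjustment scheme moves each multiplier in only one direction while tightening this bound; the technique described here allows individual multipliers to move in either direction during the search.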
{"title":"A multiplier adjustment technique for the capacitated concentrator location problem","authors":"M. Celani, R. Cerulli, M. Gaudioso, Y. Sergeyev","doi":"10.1080/10556789808805703","DOIUrl":"https://doi.org/10.1080/10556789808805703","url":null,"abstract":"We describe a new dual descent method for a pure 0— location problem known as the capacitated concentrator location problem. The multiplier adjustment technique presented is aimed to find an upper bound in a Lagrangean relaxation context permitting both to decrease and to increase multipliers in the course of the search in contrast with methods where that ones are monotonically updated.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73470597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple algebraic proof of Farkas's lemma and related theorems
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805676
C. G. Broyden
A proof is given of Farkas's lemma based on a new theorem pertaining to orthogonal matrices. It is claimed that this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from the new theorem. Farkas's lemma and other theorems of the alternative then follow trivially from Tucker's theorem.
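For reference, one common statement of Farkas's lemma (the paper's exact formulation may differ): for $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$, exactly one of the following two systems has a solution:

$$ \text{(i)} \quad Ax = b,\ x \ge 0; \qquad\qquad \text{(ii)} \quad A^{\top} y \ge 0,\ b^{\top} y < 0. $$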
{"title":"A simple algebraic proof of Farkas's lemma and related theorems","authors":"C. G. Broyden","doi":"10.1080/10556789808805676","DOIUrl":"https://doi.org/10.1080/10556789808805676","url":null,"abstract":"A proof is given of Farkas's lemma based on a new theorem pertaining to orthogodal matrices. It is claimed that this theorem is slightly more general than Tucker's theorem, which concerns skew-symmetric matrices and which may itself be derived simply from tne new theorem. Farkas's lemma and other theorems of the alternative then follow trivially from Tucker's theorem","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89743214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Regularization tools for training large feed-forward neural networks using automatic differentiation
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805701
J. Eriksson, M. Gulliksson, Per Lindström, P. Wedin
We describe regularization tools for training large-scale artificial feed-forward neural networks. We propose algorithms that explicitly use a sequence of Tikhonov regularized nonlinear least squares ...
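The truncated sentence refers to Tikhonov regularized nonlinear least squares subproblems; in a generic form (notation ours, not necessarily the paper's), with network weights $w$ and training residual vector $r(w)$, the $k$-th subproblem reads

$$ \min_{w}\ \tfrac{1}{2}\,\lVert r(w) \rVert_2^2 \;+\; \tfrac{\mu_k}{2}\,\lVert w \rVert_2^2 , $$

where the regularization parameter $\mu_k > 0$ is adjusted from one subproblem to the next.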
{"title":"Regularization tools for training large feed-forward neural networks using automatic differentiation ∗","authors":"J. Eriksson, M. Gulliksson, Per Lindström, P. Wedin","doi":"10.1080/10556789808805701","DOIUrl":"https://doi.org/10.1080/10556789808805701","url":null,"abstract":"We describe regularization tools for training large-scale artificial feed-forward neural networks. We propose algorithms that explicitly use a sequence of Tikhonov regularized nonlinear least squar ...","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85573825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On free variables in interior point methods
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805689
C. Mészáros
Interior point methods, especially algorithms for linear programming problems, are sensitive to unconstrained (free) variables in the problem. While replacing a free variable by two nonnegative ones may cause numerical instabilities, implicit handling results in a semidefinite scaling matrix at each interior point iteration. In this paper we investigate the effects of regularizing the scaling matrix. Our analysis proves that the effect of the regularization can be easily monitored and corrected if necessary. We describe the regularization scheme mainly for the efficient handling of free variables, but a similar analysis can be made for the case when small scaling factors are raised to larger values to improve the numerical stability of the systems that define the search direction. We show the superiority of our approach over the variable replacement method on a set of test problems arising from a water management application.
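The variable replacement the abstract compares against is the standard split of a free variable into a difference of two nonnegative ones,

$$ x_j = x_j^{+} - x_j^{-}, \qquad x_j^{+} \ge 0,\ x_j^{-} \ge 0, $$

where both parts can grow without bound while their difference stays fixed, a known source of instability. On our reading of the implicit alternative, the free variables contribute degenerate (zero) diagonal entries to the scaling matrix, which is why it is only semidefinite, and the regularization studied here amounts to lifting such entries to a small positive value whose effect is then monitored and corrected.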
{"title":"On free variables in interior point methods","authors":"C. Mészáros","doi":"10.1080/10556789808805689","DOIUrl":"https://doi.org/10.1080/10556789808805689","url":null,"abstract":"Interior point methods, especially the algorithms for linear programming problems are sensitive if there are unconstrained (free) variables in the problem. While replacing a free variable by two nonnegative ones may cause numerical instabilities, the implicit handling results in a semidefinite scaling matrix at each interior point iteration. In the paper we investigate the effects if the scaling matrix is regularized. Our analysis will prove that the effect of the regularization can be easily monitored and corrected if necessary. We describe the regularization scheme mainly for the efficient handling of free variables, but a similar analysis can be made for the case, when the small scaling factors are raised to larger values to improve the numerical stability of the systems that define the searcn direction. We will show the superiority of our approach over the variable replacement method on a set of test problems arising from water management application","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89479856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semidefinite relaxation and nonconvex quadratic optimization
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805690
Y. Nesterov
In this paper we consider the semidefinite relaxation of some global optimization problems. We prove that in some cases this relaxation provides us with a constant relative accuracy estimate for the exact solution.
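A representative instance of the setting (our illustration): for the nonconvex quadratic problem over the vertices of the hypercube,

$$ f^{*} = \max\{\, x^{\top} A x \;:\; x_i^2 = 1,\ i = 1,\dots,n \,\}, $$

replacing $xx^{\top}$ by a matrix variable $X$ gives the semidefinite relaxation

$$ s^{*} = \max\{\, \langle A, X \rangle \;:\; X_{ii} = 1,\ X \succeq 0 \,\} \;\ge\; f^{*}, $$

and for positive semidefinite $A$ a bound of the form $f^{*} \ge \tfrac{2}{\pi}\, s^{*}$ is the kind of constant relative accuracy estimate in question.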
{"title":"Semidefinite relaxation and nonconvex quadratic optimization","authors":"Y. Nesterov","doi":"10.1080/10556789808805690","DOIUrl":"https://doi.org/10.1080/10556789808805690","url":null,"abstract":"In this paper we consider the semidefinite relaxation of some global optimization problems. We prove that in some cases this relaxation provides us with a constant relative accuracy estimate for the exact solution.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78353392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A globally convergent primal-dual interior point method for constrained optimization
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805723
Hiroshi Yamashita
This paper proposes a primal-dual interior point method for solving general nonlinearly constrained optimization problems. The method is based on solving the barrier Karush-Kuhn-Tucker conditions for optimality by Newton's method. To globalize the iteration we introduce a barrier-penalty function and the optimality condition for minimizing this function. Our basic iteration is the Newton iteration for solving the optimality conditions with respect to the barrier-penalty function, which coincides with the Newton iteration for the barrier Karush-Kuhn-Tucker conditions if the penalty parameter is sufficiently large. It is proved that the method is globally convergent from an arbitrary initial point that strictly satisfies the bounds on the variables. The algorithm has been implemented for small dense nonlinear programs. The method solves all the problems in Hock and Schittkowski's textbook efficiently. Thus it is shown that the method given in this paper possesses good theoretical convergence ...
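In a standard form (notation ours), for $\min f(x)$ subject to $g(x) = 0$, $x \ge 0$, the barrier Karush-Kuhn-Tucker conditions perturb complementarity by the barrier parameter $\mu > 0$:

$$ \nabla f(x) - \nabla g(x)^{\top} y - z = 0, \qquad g(x) = 0, \qquad XZe = \mu e, \qquad x > 0,\ z > 0, $$

where $X = \mathrm{diag}(x)$, $Z = \mathrm{diag}(z)$, and $e = (1,\dots,1)^{\top}$. Newton's method is applied to this system while $\mu$ is driven to zero.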
{"title":"A globally convergent primal-dual interior point method for constrained optimization","authors":"Hiroshi Yamashita","doi":"10.1080/10556789808805723","DOIUrl":"https://doi.org/10.1080/10556789808805723","url":null,"abstract":"This paper proposes a primal-dual interior point method for solving general nonlinearly constrained optimization problems. The method is based on solving the Barrier Karush-Kuhn-Tucker conditions for optimality by the Newton method. To globalize the iteration we introduce the Barrier-penalty function and the optimality condition for minimizing this function. Our basic iteration is the Newton iteration for solving the optimality conditions with respect to the Barrier-penalty function which coincides with the Newton iteration for the Barrier Karush-Kuhn-Tucker conditions if the penalty parameter is sufficiently large. It is proved that the method is globally convergent from an arbitrary initial point that strictly satisfies the bounds on the variables. Implementations of the given algorithm are done for small dense nonlinear programs. The method solves all the problems in Hock and Schittkowski's textbook efficiently. Thus it is shown that the method given in this paper possesses a good theoretical convergen...","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89844984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new nonlinear ABS-type algorithm and its efficiency analysis
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805702
N. Deng, Z. Chen
As a continuation of the work in [4] and [5], a new ABS-type algorithm for nonlinear systems of equations is proposed. A major iteration of this algorithm requires n component evaluations and only one gradient evaluation. We prove that the algorithm is superlinearly convergent with R-order at least τ_n, where τ_n is the unique positive root of τ^n − τ^{n−1} − 1 = 0. It is shown that the new algorithm is usually more efficient than the methods of Newton, Brown and Brent, and than the ABS-type algorithms in [1], [4] and [5], in the sense of some standard efficiency measure.
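The root τ_n is easy to compute numerically; a minimal sketch (ours), using plain bisection on (1, 2), where τ^n − τ^{n−1} − 1 changes sign and is increasing:

```python
def tau(n, tol=1e-12):
    """Unique positive root of tau^n - tau^(n-1) - 1 = 0; it lies in (1, 2)
    since f(1) = -1 < 0, f(2) = 2^(n-1) - 1 > 0, and f is increasing there."""
    f = lambda t: t**n - t**(n - 1) - 1.0
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# tau(2) is the golden ratio, about 1.6180; tau(n) decreases toward 1 as n grows.
```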
{"title":"A new nonlinear ABS-type algorithm and its efficiency analysis ∗","authors":"N. Deng, Z. Chen","doi":"10.1080/10556789808805702","DOIUrl":"https://doi.org/10.1080/10556789808805702","url":null,"abstract":"As a continuation work following [4] and [5], a new ABS-type algorithm for a nonlinear system of equations is proposed. A major iteration of this algorithm requires n component evaluations and only one gradient evaluation. We prove that the algorithm is superlinearly convergent with R-order at least τ n , where τ n is the unique positive root of τn −τn−1 −1=0. It is shown that the new algorithm is usually more efficient than the methods of Newton, Brown and Brent, and the ABS-type algorithms in [1], [4] and [5], in the sense of some standard efficiency measure.","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/10556789808805702","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72526760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational experience with globally convergent descent methods for large sparse systems of nonlinear equations
Pub Date: 1998-01-01 | DOI: 10.1080/10556789808805677
L. Luksan, J. Vlček
This paper is devoted to globally convergent Armijo-type descent methods for solving large sparse systems of nonlinear equations. These methods include the discrete Newton method and a broad class of Newton-like methods based on various approximations of the Jacobian matrix. We propose a general theory of global convergence together with a robust algorithm that includes a special restarting strategy. This algorithm is based on the preconditioned smoothed CGS method for solving nonsymmetric systems of linear equations. After reviewing 12 particular Newton-like methods, we present results of extensive computational experiments. These results demonstrate the high efficiency of the proposed algorithm.
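The Armijo-type globalization on the merit function φ(x) = ½‖F(x)‖² can be sketched as follows. This is a generic illustration, not the paper's algorithm: `J_approx` stands for whichever Jacobian approximation distinguishes the Newton-like variants, and the preconditioned smoothed CGS solver and restarting strategy are omitted.

```python
import numpy as np

def armijo_newton_like(F, J_approx, x0, beta=0.5, sigma=1e-4,
                       tol=1e-10, max_iter=200):
    """Globalized Newton-like iteration for F(x) = 0 with Armijo
    backtracking on the merit function phi(x) = 0.5 * ||F(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        phi = 0.5 * Fx @ Fx
        if np.sqrt(2.0 * phi) < tol:
            break
        d = np.linalg.solve(J_approx(x), -Fx)   # Newton-like direction
        t = 1.0
        while True:
            Fn = F(x + t * d)
            # For an exact Newton step, grad(phi)^T d = -2*phi, so the
            # Armijo condition reads phi_new <= (1 - 2*sigma*t) * phi.
            if 0.5 * Fn @ Fn <= (1.0 - 2.0 * sigma * t) * phi:
                break
            t *= beta
            if t < 1e-12:                        # a practical code would restart here
                break
        x = x + t * d
    return x

# Example: solve x0^2 + x1^2 = 2, x0 - x1 = 0 (roots at (1, 1) and (-1, -1)).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = armijo_newton_like(F, J, x0=[2.0, 0.5])
```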
{"title":"Computational experience with globally convergent descent methods for large sparse systems of nonlinear equations","authors":"L. Luksan, J. Vlček","doi":"10.1080/10556789808805677","DOIUrl":"https://doi.org/10.1080/10556789808805677","url":null,"abstract":"This paper is devoted to globally convergent Armijo-type descent methods for solving large sparse systems of nonlinear equations. These methods include the discrete Newtcin method and a broad class of Newton-like methods based on various approximations of the Jacobian matrix. We propose a general theory of global convergence together with a robust algorithm including a special restarting strategy. This algorithm is based cfn the preconditioned smoothed CGS method for solving nonsymmetric systems of linejtr equations. After reviewing 12 particular Newton-like methods, we propose results of extensive computational experiments. These results demonstrate high efficiency of tip proposed algorithm","PeriodicalId":54673,"journal":{"name":"Optimization Methods & Software","volume":null,"pages":null},"PeriodicalIF":2.2,"publicationDate":"1998-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84679386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}