Pub Date : 2022-01-01Epub Date: 2022-05-24DOI: 10.1016/j.ejco.2022.100029
Jeanette Schmidt, Stefan Irnich
For a given graph with a vertex set that is partitioned into clusters, the generalized traveling salesman problem (GTSP) is the problem of finding a cost-minimal cycle that contains exactly one vertex of every cluster. We introduce three new GTSP neighborhoods that allow the simultaneous permutation of the sequence of the clusters and the selection of vertices from each cluster. The three neighborhoods and some known neighborhoods from the literature are combined into an effective iterated local search (ILS) for the GTSP. The ILS performs a straightforward random neighborhood selection within the local search and applies an ordinary record-to-record ILS acceptance criterion. The computational experiments on four symmetric standard GTSP libraries show that, with some purposeful refinements, the ILS can compete with state-of-the-art GTSP algorithms.
{"title":"New neighborhoods and an iterated local search algorithm for the generalized traveling salesman problem","authors":"Jeanette Schmidt, Stefan Irnich","doi":"10.1016/j.ejco.2022.100029","DOIUrl":"10.1016/j.ejco.2022.100029","url":null,"abstract":"<div><p>For a given graph with a vertex set that is partitioned into clusters, the generalized traveling salesman problem (GTSP) is the problem of finding a cost-minimal cycle that contains exactly one vertex of every cluster. We introduce three new GTSP neighborhoods that allow the simultaneous permutation of the sequence of the clusters and the selection of vertices from each cluster. The three neighborhoods and some known neighborhoods from the literature are combined into an effective iterated local search (ILS) for the GTSP. The ILS performs a straightforward random neighborhood selection within the local search and applies an ordinary record-to-record ILS acceptance criterion. The computational experiments on four symmetric standard GTSP libraries show that, with some purposeful refinements, the ILS can compete with state-of-the-art GTSP algorithms.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"10 ","pages":"Article 100029"},"PeriodicalIF":2.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440622000053/pdfft?md5=f5688517686dac40484c0d65534f3440&pid=1-s2.0-S2192440622000053-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130152340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
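The ILS scheme the abstract describes (random neighborhood selection inside the local search plus a record-to-record acceptance criterion) can be sketched on a toy instance. Everything below, the instance, the two simple neighborhoods (cluster-position swap and vertex re-selection), and the parameters, is illustrative and much cruder than the paper's three new neighborhoods:

```python
# Minimal iterated-local-search sketch for the GTSP on a made-up instance.
import math, random

random.seed(0)

# Hypothetical instance: 4 clusters, each a list of 2-D points.
clusters = [[(0, 0), (0, 1)], [(4, 0), (4, 1)], [(4, 4), (3, 4)], [(0, 4), (1, 4)]]

def tour_cost(order, choice):
    # Visit one chosen vertex per cluster, in the given cluster order.
    pts = [clusters[c][choice[c]] for c in order]
    return sum(math.dist(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts)))

def local_search(order, choice, iters=200):
    best = tour_cost(order, choice)
    for _ in range(iters):
        # Straightforward random neighborhood selection, as in the ILS above.
        if random.random() < 0.5:                 # swap two cluster positions
            i, j = random.sample(range(len(order)), 2)
            order[i], order[j] = order[j], order[i]
            if tour_cost(order, choice) < best:
                best = tour_cost(order, choice)
            else:
                order[i], order[j] = order[j], order[i]   # undo
        else:                                     # re-select a vertex in one cluster
            c_idx = random.randrange(len(clusters))
            old = choice[c_idx]
            choice[c_idx] = random.randrange(len(clusters[c_idx]))
            if tour_cost(order, choice) < best:
                best = tour_cost(order, choice)
            else:
                choice[c_idx] = old               # undo
    return order, choice, best

def ils(restarts=20, theta=0.05):
    order, choice = list(range(len(clusters))), [0] * len(clusters)
    order, choice, record = local_search(order, choice)
    for _ in range(restarts):
        cand_order = order[:]
        random.shuffle(cand_order)                # perturbation
        cand_order, cand_choice, cost = local_search(cand_order, choice[:])
        # Record-to-record acceptance: keep candidates close to the best found.
        if cost <= record * (1 + theta):
            order, choice = cand_order, cand_choice
            record = min(record, cost)
    return record

print(round(ils(), 3))
```

The record-to-record criterion accepts mildly worse solutions (within a factor `1 + theta` of the record), which lets the search escape local optima without a full restart.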
Pub Date : 2022-01-01Epub Date: 2022-10-12DOI: 10.1016/j.ejco.2022.100047
Aritra Dutta , El Houcine Bergou , Yunming Xiao , Marco Canini , Peter Richtárik
Optimization acceleration techniques such as momentum play a key role in state-of-the-art machine learning algorithms. Recently, generic vector sequence extrapolation techniques, such as the regularized nonlinear acceleration (RNA) of Scieur et al. [22], were proposed and shown to accelerate fixed point iterations. In contrast to RNA, which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call direct nonlinear acceleration (DNA). In DNA, we aim to minimize (an approximation of) the function value at the extrapolated point instead. We adopt a regularized approach with regularizers designed to prevent the model from entering a region in which the functional approximation is less precise. While the computational cost of DNA is comparable to that of RNA, our direct approach significantly outperforms RNA on both synthetic and real-world datasets. While the focus of this paper is on convex problems, we obtain very encouraging results in accelerating the training of neural networks.
{"title":"Direct nonlinear acceleration","authors":"Aritra Dutta , El Houcine Bergou , Yunming Xiao , Marco Canini , Peter Richtárik","doi":"10.1016/j.ejco.2022.100047","DOIUrl":"10.1016/j.ejco.2022.100047","url":null,"abstract":"<div><p>Optimization acceleration techniques such as momentum play a key role in state-of-the-art machine learning algorithms. Recently, generic vector sequence extrapolation techniques, such as regularized nonlinear acceleration (RNA) of Scieur et al. <span>[22]</span>, were proposed and shown to accelerate fixed point iterations. In contrast to RNA which computes extrapolation coefficients by (approximately) setting the gradient of the objective function to zero at the extrapolated point, we propose a more direct approach, which we call <em>direct nonlinear acceleration (DNA)</em>. In DNA, we aim to minimize (an approximation of) the function value at the extrapolated point instead. We adopt a regularized approach with regularizers designed to prevent the model from entering a region in which the functional approximation is less precise. While the computational cost of DNA is comparable to that of RNA, our direct approach significantly outperforms RNA on both synthetic and real-world datasets. While the focus of this paper is on convex problems, we obtain very encouraging results in accelerating the training of neural networks.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"10 ","pages":"Article 100047"},"PeriodicalIF":2.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440622000235/pdfft?md5=1af83969ee833bb0a8954f808f6ca4ee&pid=1-s2.0-S2192440622000235-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131687887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
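The core DNA idea, choosing extrapolation coefficients that minimize the function value at the extrapolated point rather than zeroing its gradient, can be illustrated on a toy 2-D quadratic. The brute-force grid search over coefficients below stands in for the paper's regularized subproblem and is purely illustrative:

```python
# Toy illustration of function-value-based extrapolation on a quadratic.
import itertools

a = (1.0, 10.0)                       # f(x) = 0.5*(a1*x1^2 + a2*x2^2), minimum at 0

def f(x):
    return 0.5 * (a[0] * x[0] ** 2 + a[1] * x[1] ** 2)

def grad_step(x, lr=0.09):
    # One gradient-descent step on the diagonal quadratic.
    return (x[0] * (1 - lr * a[0]), x[1] * (1 - lr * a[1]))

# Collect a short window of fixed-point (gradient-descent) iterates.
xs = [(1.0, 1.0)]
for _ in range(4):
    xs.append(grad_step(xs[-1]))
window = xs[-3:]

# DNA-style extrapolation: coefficients sum to one; pick the combination
# whose *function value* at the extrapolated point is smallest (coarse grid
# in place of the paper's regularized solve).
grid = [i / 20.0 - 1.0 for i in range(61)]       # candidate c_i in [-1, 2]
best_val = float("inf")
for c in itertools.product(grid, repeat=len(window) - 1):
    coeffs = list(c) + [1.0 - sum(c)]
    y = tuple(sum(ci * xi[d] for ci, xi in zip(coeffs, window)) for d in (0, 1))
    best_val = min(best_val, f(y))

print(best_val <= f(window[-1]))      # → True: no worse than the last iterate
```

Because the coefficients (0, 0, 1) are always feasible, the extrapolated point is never worse than the last iterate; in practice the combination lands much closer to the minimizer.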
EUROPT, the Continuous Optimization working group of EURO, celebrated its 20 years of activity in 2020. We trace the history of this working group by presenting the major milestones that have led to its current structure and organization and its major trademarks, such as the annual EUROPT workshop and the EUROPT Fellow recognition.
{"title":"Twenty years of EUROPT, the EURO working group on Continuous Optimization","authors":"Sonia Cafieri , Tatiana Tchemisova , Gerhard-Wilhelm Weber","doi":"10.1016/j.ejco.2022.100039","DOIUrl":"10.1016/j.ejco.2022.100039","url":null,"abstract":"<div><p>EUROPT, the Continuous Optimization working group of EURO, celebrated its 20 years of activity in 2020. We trace the history of this working group by presenting the major milestones that have led to its current structure and organization and its major trademarks, such as the annual EUROPT workshop and the EUROPT Fellow recognition.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"10 ","pages":"Article 100039"},"PeriodicalIF":2.4,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440622000156/pdfft?md5=d70136dd19d5184ddafe323e89eb1929&pid=1-s2.0-S2192440622000156-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129955865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-01-01Epub Date: 2021-10-20DOI: 10.1016/j.ejco.2021.100015
Pavel Dvurechensky , Shimrit Shtern , Mathias Staudigl
First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, most importantly machine learning, signal processing, imaging, and control theory. First-order methods have the potential to provide low-accuracy solutions at low computational cost, which makes them an attractive set of tools for large-scale optimization problems. In this survey, we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method and its accelerated versions. Additionally, we survey recent developments within the class of projection-free methods and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.
{"title":"First-Order Methods for Convex Optimization","authors":"Pavel Dvurechensky , Shimrit Shtern , Mathias Staudigl","doi":"10.1016/j.ejco.2021.100015","DOIUrl":"10.1016/j.ejco.2021.100015","url":null,"abstract":"<div><p>First-order methods for solving convex optimization problems have been at the forefront of mathematical optimization in the last 20 years. The rapid development of this important class of algorithms is motivated by the success stories reported in various applications, including most importantly machine learning, signal processing, imaging and control theory. First-order methods have the potential to provide low accuracy solutions at low computational complexity which makes them an attractive set of tools in large-scale optimization problems. In this survey, we cover a number of key developments in gradient-based optimization methods. This includes non-Euclidean extensions of the classical proximal gradient method, and its accelerated versions. Additionally we survey recent developments within the class of projection-free methods, and proximal versions of primal-dual schemes. We give complete proofs for various key results, and highlight the unifying aspects of several optimization algorithms.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100015"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440621001428/pdfft?md5=19763cbf839252d3f78a91ae92c0f36f&pid=1-s2.0-S2192440621001428-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128767137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
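The classical proximal gradient method covered by the survey can be shown in minimal form on the standard composite problem min_x 0.5·||Ax − b||² + λ·||x||₁, whose proximal step is soft-thresholding. The matrix, data, and step size below are illustrative choices, not taken from the survey:

```python
# Minimal proximal gradient (ISTA) sketch for a lasso-type problem.
A = [[1.0, 0.0], [0.0, 2.0]]
b = [1.0, 0.2]
lam = 0.5

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def grad(x):
    # Gradient of the smooth part: A^T (A x - b).
    r = [ax - bi for ax, bi in zip(matvec(A, x), b)]
    return matvec(list(zip(*A)), r)

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1, applied componentwise.
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

x = [0.0, 0.0]
step = 0.2          # <= 1/L, where L = largest eigenvalue of A^T A (here 4)
for _ in range(200):
    y = [xi - step * gi for xi, gi in zip(x, grad(x))]   # gradient step
    x = soft_threshold(y, step * lam)                    # proximal step

print([round(v, 3) for v in x])      # → [0.5, 0.0]
```

The second coordinate is driven exactly to zero by the prox, which is the sparsity-inducing behavior that makes this method a workhorse in signal processing and machine learning.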
Pub Date : 2021-01-01Epub Date: 2021-12-01DOI: 10.1016/j.ejco.2021.100018
Gilbert Laporte
Ailsa H. Land, who received the 2021 EURO Gold Medal, made some important contributions to the study of the Traveling Salesman Problem, which were published in a 1955 journal article and in a 1979 working paper. The purpose of this introductory note is to describe these contributions.
{"title":"Some contributions of Ailsa H. Land to the study of the traveling salesman problem","authors":"Gilbert Laporte","doi":"10.1016/j.ejco.2021.100018","DOIUrl":"10.1016/j.ejco.2021.100018","url":null,"abstract":"<div><p>Ailsa H. Land, who received the 2021 EURO Gold Medal, made some important contributions to the study of the Traveling Salesman Problem, which were published in a 1955 journal article and in a 1979 working paper. The purpose of this introductory note is to describe these contributions.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100018"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440621001453/pdfft?md5=44e328d6324bf9179a1e115333a0d1cb&pid=1-s2.0-S2192440621001453-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"54300215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2021-01-01Epub Date: 2021-11-23DOI: 10.1016/j.ejco.2021.100021
Mirjam Dür , Franz Rendl
A conic optimization problem is a problem involving a constraint that the optimization variable be in some closed convex cone. Prominent examples are linear programs (LP), second order cone programs (SOCP), semidefinite problems (SDP), and copositive problems. We survey recent progress made in this area. In particular, we highlight the connections between nonconvex quadratic problems, binary quadratic problems, and copositive optimization. We review how tight bounds can be obtained by relaxing the copositivity constraint to semidefiniteness, and we discuss the effect that different modelling techniques have on the quality of the bounds. We also provide some new techniques for lifting linear constraints and show how these can be used for stable set and coloring relaxations.
{"title":"Conic optimization: A survey with special focus on copositive optimization and binary quadratic problems","authors":"Mirjam Dür , Franz Rendl","doi":"10.1016/j.ejco.2021.100021","DOIUrl":"https://doi.org/10.1016/j.ejco.2021.100021","url":null,"abstract":"<div><p>A conic optimization problem is a problem involving a constraint that the optimization variable be in some closed convex cone. Prominent examples are linear programs (LP), second order cone programs (SOCP), semidefinite problems (SDP), and copositive problems. We survey recent progress made in this area. In particular, we highlight the connections between nonconvex quadratic problems, binary quadratic problems, and copositive optimization. We review how tight bounds can be obtained by relaxing the copositivity constraint to semidefiniteness, and we discuss the effect that different modelling techniques have on the quality of the bounds. We also provide some new techniques for lifting linear constraints and show how these can be used for stable set and coloring relaxations.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100021"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440621001489/pdfft?md5=2fd9af7537cd98f646e5236b30d3d05f&pid=1-s2.0-S2192440621001489-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91979793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this manuscript, we consider smooth multi-objective optimization problems with convex constraints. We propose an extension of a multi-objective augmented Lagrangian method from the recent literature. The new algorithm is specifically designed to handle sets of points and to produce good approximations of the whole Pareto front, as opposed to the original method, which converges to a single solution. We prove properties of global convergence to Pareto stationarity for the sequences of points generated by our procedure. We then compare the performance of the proposed method with that of the main state-of-the-art algorithms available for the considered class of problems. The results of our experiments show the effectiveness of the proposed approach and its general superiority over competing methods.
{"title":"Pareto front approximation through a multi-objective augmented Lagrangian method","authors":"Guido Cocchi , Matteo Lapucci , Pierluigi Mansueto","doi":"10.1016/j.ejco.2021.100008","DOIUrl":"https://doi.org/10.1016/j.ejco.2021.100008","url":null,"abstract":"<div><p>In this manuscript, we consider smooth multi-objective optimization problems with convex constraints. We propose an extension of a multi-objective augmented Lagrangian Method from recent literature. The new algorithm is specifically designed to handle sets of points and produce good approximations of the whole Pareto front, as opposed to the original one which converges to a single solution. We prove properties of global convergence to Pareto stationarity for the sequences of points generated by our procedure. We then compare the performance of the proposed method with those of the main state-of-the-art algorithms available for the considered class of problems. The results of our experiments show the effectiveness and general superiority w.r.t. competitors of our proposed approach.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100008"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.ejco.2021.100008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91979837","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
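The set-based viewpoint, maintaining a population of points that approximates the whole Pareto front rather than a single solution, rests on non-dominance. A simple filter makes the notion concrete; the bi-objective function and sample points below are hypothetical, and the filter replaces (rather than implements) the paper's augmented Lagrangian machinery:

```python
# Non-dominated filtering: the basic operation behind set-based
# Pareto front approximation (minimization in every objective).
def dominates(p, q):
    # p dominates q if it is no worse everywhere and strictly better somewhere.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def pareto_filter(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Bi-objective values f(x) = (x^2, (x - 2)^2) sampled on a grid; every
# x in [0, 2] is Pareto optimal, so only the appended outlier is dominated.
xs = [i / 4 for i in range(9)]
front = pareto_filter([(x ** 2, (x - 2) ** 2) for x in xs] + [(9.0, 9.0)])

print(len(front))                     # → 9
```

A set-based solver repeatedly improves the points of such a population and applies exactly this kind of filter to keep only the non-dominated ones.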
Pub Date : 2021-01-01Epub Date: 2021-01-11DOI: 10.1016/j.ejco.2020.100002
Heiner Ackermann , Erik Diessel , Sven O. Krumke
We consider an adjustable robust optimization problem arising in the area of supply chains: given sets of suppliers and demand nodes, we wish to find a flow that is robust with respect to failures of the suppliers. The objective is to determine a flow that minimizes the amount of shortage in the worst case after an optimal mitigation has been performed. An optimal mitigation is an additional flow in the residual network that mitigates as much shortage at the demand sites as possible. For this problem we give a mathematical formulation, yielding a robust flow problem with three stages where the mitigation of the last stage can be chosen adaptively depending on the scenario. We show that already evaluating the robustness of a solution is NP-hard. For optimizing with respect to this NP-hard objective function, we compare three algorithms: an algorithm based on iterative cut generation that solves medium-sized instances efficiently, a simple Outer Linearization Algorithm, and a Scenario Enumeration algorithm. We illustrate the performance by numerical experiments. The results show that this instance of fully adjustable robust optimization problems can be solved exactly with reasonable performance. We also describe possible extensions to the model and the algorithm.
{"title":"Robust flows with adaptive mitigation","authors":"Heiner Ackermann , Erik Diessel , Sven O. Krumke","doi":"10.1016/j.ejco.2020.100002","DOIUrl":"https://doi.org/10.1016/j.ejco.2020.100002","url":null,"abstract":"<div><p>We consider an adjustable robust optimization problem arising in the area of supply chains: given sets of suppliers and demand nodes, we wish to find a flow that is robust with respect to failures of the suppliers. The objective is to determine a flow that minimizes the amount of shortage in the worst-case after an optimal mitigation has been performed. An optimal mitigation is an additional flow in the residual network that mitigates as much shortage at the demand sites as possible. For this problem we give a mathematical formulation, yielding a robust flow problem with three stages where the mitigation of the last stage can be chosen adaptively depending on the scenario. We show that already evaluating the robustness of a solution is <span><math><mi>NP</mi></math></span>-hard. For optimizing with respect to this <span><math><mi>NP</mi></math></span>-hard objective function, we compare three algorithms. Namely an algorithm based on iterative cut generation that solves medium-sized instances efficiently, a simple Outer Linearization Algorithm and a Scenario Enumeration algorithm. We illustrate the performance by numerical experiments. The results show that this instance of fully adjustable robust optimization problems can be solved exactly with a reasonable performance. We also describe possible extensions to the model and the algorithm.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100002"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/j.ejco.2020.100002","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91979864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
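Of the three algorithms compared, scenario enumeration is the simplest to sketch. In the toy model below, suppliers feed a single demand node, so mitigation degenerates to using all surviving capacity; it illustrates only the worst-case shortage computation, not the paper's three-stage flow model (capacities, demand, and the failure budget are made up):

```python
# Scenario enumeration for worst-case shortage with up to k supplier failures.
from itertools import combinations

capacities = [4.0, 3.0, 3.0, 2.0]    # hypothetical supplier capacities
demand = 9.0
k = 1                                # at most one supplier fails

def shortage(failed):
    # With a single demand node, optimal mitigation simply routes all
    # surviving capacity, so shortage is demand minus surviving supply.
    supply = sum(c for i, c in enumerate(capacities) if i not in failed)
    return max(0.0, demand - supply)

# Enumerate all failure scenarios with at most k failed suppliers.
worst = max(shortage(set(s))
            for r in range(k + 1)
            for s in combinations(range(len(capacities)), r))
print(worst)                         # → 1.0 (losing the largest supplier)
```

The enumeration grows exponentially in k, which is why the paper pairs it against iterative cut generation and outer linearization on larger networks.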
Pub Date : 2021-01-01Epub Date: 2021-12-01DOI: 10.1016/j.ejco.2021.100017
A. Land
A simplex-based FORTRAN code, working entirely in integer arithmetic, has been developed for the exact solution of travelling-salesman problems. The code adds tour-barring constraints as they are found to be violated. It deals with fractional solutions by adding two-matching constraints and as a last resort by ‘Gomory’ cutting plane constraints of the Method of Integer Forms. Most of the calculations are carried out on only a subset of the variables, with only occasional passes through the whole set of possible variables. Computational experience on some 100-city problems is reported.
{"title":"The Solution of some 100-city Travelling Salesman Problems","authors":"A. Land","doi":"10.1016/j.ejco.2021.100017","DOIUrl":"https://doi.org/10.1016/j.ejco.2021.100017","url":null,"abstract":"<div><p>A simplex-based <span>FORTRAN</span> code, working entirely in integer arithmetic, has been developed for the exact solution of travelling-salesman problems. The code adds tour-barring constraints as they are found to be violated. It deals with fractional solutions by adding two-matching constraints and as a last resort by ‘Gomory’ cutting plane constraints of the Method of Integer Forms. Most of the calculations are carried out on only a subset of the variables, with only occasional passes through the whole set of possible variables. Computational experience on some 100-city problems is reported.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100017"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440621001441/pdfft?md5=a52a3c9bdab27e0da82eb05946357c1d&pid=1-s2.0-S2192440621001441-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"92106932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
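The separation step behind "adding tour-barring constraints as they are found to be violated" can be sketched for integer solutions: the edges chosen by the relaxation are checked for connectivity, and every connected component smaller than the number of cities certifies a violated subtour-elimination constraint. The 6-city solution below is hypothetical:

```python
# Detecting violated subtour (tour-barring) constraints in an integer solution.
def components(n, edges):
    # Union-find with path halving to get connected components.
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        parent[find(u)] = find(v)
    comps = {}
    for v in range(n):
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

# Hypothetical degree-2 solution on 6 cities made of two subtours.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3)]
cuts = [c for c in components(6, edges) if len(c) < 6]
print(len(cuts))                     # → 2 violated subtour constraints
```

Each component S found this way yields a cut requiring at least two edges to cross the boundary of S; re-solving with the cuts added drives the solution toward a single tour.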
Pub Date : 2021-01-01Epub Date: 2021-11-08DOI: 10.1016/j.ejco.2021.100019
V. Jeyakumar, G. Li, D. Woolnough
Adjustable robust optimization allows for some variables to depend upon the uncertain data after its realization. However, the uncertainty is often not revealed exactly. Incorporating inexactness of the revealed data in the construction of ellipsoidal uncertainty sets, we present an exact second-order cone program reformulation for robust linear optimization problems with inexact data and quadratically adjustable variables. This is achieved by establishing a generalization of the celebrated S-lemma for a separable quadratic inequality system with at most one non-homogeneous function. It allows us to reformulate the resulting separable quadratic constraints over an intersection of two ellipsoids in terms of second-order cone constraints. We illustrate our results via numerical experiments on adjustable robust lot-sizing problems with demand uncertainty, showing improvements over corresponding problems with affinely adjustable variables as well as with exactly revealed data.
{"title":"Quadratically adjustable robust linear optimization with inexact data via generalized S-lemma: Exact second-order cone program reformulations","authors":"V. Jeyakumar, G. Li, D. Woolnough","doi":"10.1016/j.ejco.2021.100019","DOIUrl":"https://doi.org/10.1016/j.ejco.2021.100019","url":null,"abstract":"<div><p>Adjustable robust optimization allows for some variables to depend upon the uncertain data after its realization. However, the uncertainty is often not revealed exactly. Incorporating inexactness of the revealed data in the construction of ellipsoidal uncertainty sets, we present an exact second-order cone program reformulation for robust linear optimization problems with inexact data and quadratically adjustable variables. This is achieved by establishing a generalization of the celebrated S-lemma for a separable quadratic inequality system with at most one non-homogeneous function. It allows us to reformulate the resulting separable quadratic constraints over an intersection of two ellipsoids in terms of second-order cone constraints. We illustrate our results via numerical experiments on adjustable robust lot-sizing problems with demand uncertainty, showing improvements over corresponding problems with affinely adjustable variables as well as with exactly revealed data.</p></div>","PeriodicalId":51880,"journal":{"name":"EURO Journal on Computational Optimization","volume":"9 ","pages":"Article 100019"},"PeriodicalIF":2.4,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2192440621001465/pdfft?md5=a4dc90a6e60a07a7b22d11984e1bb230&pid=1-s2.0-S2192440621001465-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91979794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}