Revealing the Community Structure of Urban Bus Networks: a Multi-view Graph Learning Approach
Pub Date: 2024-05-18 | DOI: 10.1007/s11067-024-09626-2
Shuaiming Chen, Ximing Ji, Haipeng Shao
Despite great progress in enhancing the efficiency of public transport, existing algorithms still cannot seamlessly incorporate structural characteristics. Moreover, a single-view modelling approach is too limited to comprehensively explore the structure of urban bus networks. In this research, a multi-view graph learning algorithm (MvGL) is proposed to aggregate community information from multiple views of an urban bus system. First, a single-view graph encoder module captures latent community relationships while learning node embeddings. Second, inspired by the attention mechanism, a multi-view graph encoder module is designed to fuse node embeddings from different views, aiming to perceive the community information of the urban bus network more comprehensively. Then, the community assignment is updated by a differentiable clustering layer. Finally, a well-defined objective function, which integrates the node, community, and graph levels, helps improve the quality of community detection. Experimental results demonstrate that MvGL can effectively aggregate community information from different views and further improve the quality of community detection. This research contributes to the understanding of the structural characteristics of public transport networks and facilitates their operational efficiency.
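The attention-style fusion step the abstract describes is compact enough to illustrate. Below is a minimal NumPy sketch, assuming each view produces a node-embedding matrix of the same shape; the scoring vector and all names are illustrative, not the authors' implementation.

```python
import numpy as np

def fuse_views(view_embeddings, att_vec):
    """Attention-style fusion of node embeddings from several views.

    view_embeddings: list of V arrays, each (n_nodes, dim), one per view.
    att_vec: (dim,) scoring vector (learnable in practice, fixed here).
    Returns a (n_nodes, dim) fused embedding matrix.
    """
    Z = np.stack(view_embeddings)              # (V, n, dim)
    scores = Z @ att_vec                       # (V, n): per-view, per-node score
    alpha = np.exp(scores - scores.max(axis=0))  # softmax over views, per node
    alpha /= alpha.sum(axis=0, keepdims=True)
    return (alpha[..., None] * Z).sum(axis=0)  # attention-weighted sum of views
```

Each node thus receives its own mixture of views, which is what lets the fused embedding emphasize whichever view carries the clearest community signal for that node.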
{"title":"Revealing the Community Structure of Urban Bus Networks: a Multi-view Graph Learning Approach","authors":"Shuaiming Chen, Ximing Ji, Haipeng Shao","doi":"10.1007/s11067-024-09626-2","DOIUrl":"https://doi.org/10.1007/s11067-024-09626-2","url":null,"abstract":"<p>Despite great progress in enhancing the efficiency of public transport, one still cannot seamlessly incorporate structural characteristics into existing algorithms. Moreover, comprehensively exploring the structure of urban bus networks through a single-view modelling approach is limited. In this research, a multi-view graph learning algorithm (MvGL) is proposed to aggregate community information from multiple views of urban bus system. First, by developing a single-view graph encoder module, latent community relationships can be captured during learning node embeddings. Second, inspired by attention mechanism, a multi-view graph encoder module is designed to fuse node embeddings in different views, aims to perceive more community information of urban bus network comprehensively. Then, the community assignment can be updated by using a differentiable clustering layer. Finally, a well-defined objective function, which integrates node level, community level and graph level, can help improve the quality of community detection. Experimental results demonstrated that MvGL can effectively aggregate community information from different views and further improve the quality of community detection. This research contributes to the understanding the structural characteristics of public transport networks and facilitates their operational efficiency.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"131 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141059823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Further Analysis of the Weber Problem
Pub Date: 2024-05-07 | DOI: 10.1007/s11067-024-09627-1
Pawel Kalczynski, Zvi Drezner
The most basic location problem is the Weber problem, which is the basis of many advanced location models: find the location of a facility that minimizes the sum of weighted distances to a set of demand points. Solution approaches have convergence issues when the optimal solution is at a demand point, because the derivatives of the objective function do not exist at a demand point and are discontinuous near it. In this paper we investigate the probability that the optimal location is at a demand point, create example problems that may take millions of iterations to converge to the optimal location, and suggest a simple improvement to the Weiszfeld solution algorithm. One would expect that as the number of demand points increases to infinity, the probability that the optimal location is at a demand point converges to 1, because there is no "space" left to locate the facility away from a demand point; consequently, we may experience convergence issues for relatively large problems. However, it was shown that for randomly generated points in a circle this probability converges to zero, which is counterintuitive. In this paper we further investigate this probability. Another interesting result of our experiments is that FORTRAN is much faster than Python for such simulations. Researchers are advised to apply old-fashioned programming languages rather than newer software for simulations of this type.
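For reference, the Weiszfeld iteration the paper improves upon is short enough to state in full. The sketch below is the textbook algorithm with a common safeguard (stopping when an iterate lands on a demand point, where the update is undefined); it is not the specific improvement proposed in the paper.

```python
import numpy as np

def weiszfeld(points, weights, tol=1e-10, max_iter=1_000_000):
    """Textbook Weiszfeld iteration for the Weber problem.

    points: (n, 2) demand point coordinates; weights: (n,) positive weights.
    Minimizes sum_i w_i * ||x - p_i|| by iterated weighted averaging.
    """
    x = np.average(points, axis=0, weights=weights)  # weighted centroid start
    for _ in range(max_iter):
        d = np.linalg.norm(points - x, axis=1)
        if np.any(d < 1e-14):       # iterate hit a demand point: the update
            return x                # below is undefined there, so stop
        w = weights / d
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

The slow cases the paper constructs are exactly those where the optimum sits at (or extremely close to) a demand point, so the step sizes shrink and the loop above can run for millions of iterations.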
{"title":"Further Analysis of the Weber Problem","authors":"Pawel Kalczynski, Zvi Drezner","doi":"10.1007/s11067-024-09627-1","DOIUrl":"https://doi.org/10.1007/s11067-024-09627-1","url":null,"abstract":"<p>The most basic location problem is the Weber problem, that is a basis to many advanced location models. It is finding the location of a facility which minimizes the sum of weighted distances to a set of demand points. Solution approaches have convergence issues when the optimal solution is at a demand point because the derivatives of the objective function do not exist on a demand point and are discontinuous near it. In this paper we investigate the probability that the optimal location is on a demand point, create example problems that may take millions of iterations to converge to the optimal location, and suggest a simple improvement to the Weiszfeld solution algorithm. One would expect that if the number of demand points increases to infinity, the probability that the optimal location is on a demand point converges to 1 because there is no “space\" left to locate the facility not on a demand point. Consequently, we may experience convergence issues for relatively large problems. However, it was shown that for randomly generated points in a circle the probability converges to zero, which is counter intuitive. In this paper we further investigate this probability. Another interesting result of our experiments is that FORTRAN is much faster than Python for such simulations. Researchers are advised to apply old fashioned programming languages rather than newer software for simulations of this type.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140890142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extensions to Competitive Facility Location with Multi-purpose Trips
Pub Date: 2024-05-07 | DOI: 10.1007/s11067-024-09625-3
Malgorzata Miklas-Kalczynska
Existing location models that consider multi-purpose shopping behavior limit the number of stops a customer makes to two. We introduce a multi-purpose (MP) competitive facility location model that allows more than two stops. We locate one or more facilities in a competitive environment, assuming a shopper may stop multiple times during one trip to purchase different complementary goods or services. We show that when some or all trips are multi-purpose, our model captures at least as much market share as MP models with fewer purposes. Our extensive simulation experiments show that the MP models work best when multiple new facilities are added. As the number of facilities increases, however, the returns diminish due to cannibalization. Also, given the significant increase in complexity for each additional stop, expanding the model beyond three purposes may not be practical.
{"title":"Extensions to Competitive Facility Location with Multi-purpose Trips","authors":"Malgorzata Miklas-Kalczynska","doi":"10.1007/s11067-024-09625-3","DOIUrl":"https://doi.org/10.1007/s11067-024-09625-3","url":null,"abstract":"<p>Existing location models considering multi-purpose shopping behavior limit the number of stops a customer makes to two. We introduce the multi-purpose (MP) competitive facility location model with more than two stops. We locate one or more facilities in a competitive environment, assuming a shopper may stop multiple times during one trip to purchase different complementary goods or services. We show that when some or all trips are multi-purpose, our model captures at least as much market share as the MP models with fewer purposes. Our extensive simulation experiments show that the MP models work best when multiple new facilities are added. As the number of facilities increases, however, the returns diminish due to cannibalization. Also, with significant increases in complexity for each additional stop added, expanding the model beyond three purposes may not be practical.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140888131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spatiotemporal Analysis of Traffic Data: Correspondence Analysis with Fuzzified Variables vs. Principal Component Analysis Using Weather and Gas Price as Extra Data
Pub Date: 2024-05-03 | DOI: 10.1007/s11067-024-09624-4
Pierre Loslever
The study of large rail traffic databases presents formidable challenges for transport system specialists, particularly when keeping space and time factors together while also showing influencing factors related to the users and the transport network environment. A bibliographic analysis in both statistics and transport revealed that geometrical methods for feature extraction and dimension reduction are suitable for such a study. Since there are several methods/options, each in principle with its required input data, this article compares Principal Component Analysis (PCA) and Correspondence Analysis (CA) for traffic frequency data, both methods actually being used with such data. The procedure is as follows. First, a grand matrix is built whose rows correspond to time windows and whose columns correspond to all possible origin-destination links. This large frequency matrix is then studied using PCA and CA. The next part of the procedure studies the effects of influencing factors, either keeping the quantitative scales with PCA or using fuzzy segmentation with CA, the corresponding data being treated as supplementary column points. The procedure is applied to a rail transport network comprising 10 stations (one corresponding to the airport) and one-hour time windows over 4 months, the available influencing factors being temperature, rain level, and gas price. The comparative analysis shows that CA graphical outputs are more complicated than PCA ones but reveal more specific results, e.g. network user behavior related to the airport, while PCA mainly contrasts link clusters with low vs. high frequencies. Fuzzy windowing, performed using actual and simulated data, reduces the loss of information when averaging (e.g. over time) and can reveal non-linear relational phenomena. The possibility of displaying new traffic data in real time is also considered.
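Both analyses reduce to an SVD of a suitably transformed time-window-by-link frequency matrix; PCA is available directly in standard libraries, while CA is short enough to write out. A minimal sketch under that setup (variable names are illustrative, and the article's fuzzy-windowing and supplementary-point steps are omitted):

```python
import numpy as np

def correspondence_analysis(X):
    """CA of a nonnegative frequency matrix X (time windows x OD links).

    Returns row/column principal coordinates and the inertia per axis.
    """
    P = X / X.sum()                        # correspondence matrix
    r, c = P.sum(axis=1), P.sum(axis=0)    # row / column masses
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardized residuals
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    rows = (U * s) / np.sqrt(r)[:, None]     # time-window coordinates
    cols = (Vt.T * s) / np.sqrt(c)[:, None]  # OD-link coordinates
    return rows, cols, s**2                  # s**2 = principal inertias
```

Plotting the first two columns of `rows` and `cols` on the same axes gives the kind of joint map the article compares against PCA score/loading plots.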
Selection of Secondary Hub Airport Location Based on Connectivity and Green Airport Solutions
Pub Date: 2024-04-10 | DOI: 10.1007/s11067-024-09622-6
Seda Hatipoğlu, Hazal Ergül, Ecem Yazıcı
Air transport systems can be represented as a network in which airports serve as nodes and flight paths as links. The degree of connection between airports is defined as connectivity in air transport. With increasing competition after liberalization of the air transport sector, airport connectivity has become an increasingly important issue. Thanks to Türkiye's advantageous geographical situation and the hub-and-spoke system applied in the network structures of the operating airline companies, the city of Istanbul has become a major transfer point. However, considering the increasing demand for air transport, population growth, tourism and trade volume, airport capacity constraints, and existing and ongoing transport infrastructure investments, establishing a polycentric network structure in Türkiye is expected to increase connectivity in air transport. In this study, to contribute to the development of a multi-center air transport model, a hub location selection model is developed that considers the factors affecting airport connectivity. GAMS software was used to solve the model. To measure high-speed train accessibility, green airport applications, and the effects of the Covid-19 pandemic, different scenarios were produced and the solution results evaluated.
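The paper's model (with connectivity and green-airport factors, solved in GAMS) is not reproduced here, but the underlying selection problem has the flavor of a discrete facility location MILP. As a toy illustration under that simplifying assumption, here is a p-median-style hub selection in Python/PuLP; all names and the objective are illustrative stand-ins, not the paper's formulation.

```python
import pulp

def select_hubs(dist, p):
    """Toy p-median stand-in for hub selection: open p hubs and assign
    each airport to exactly one hub, minimizing total assignment distance."""
    n = len(dist)
    prob = pulp.LpProblem("hub_selection", pulp.LpMinimize)
    y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n)]       # hub open?
    x = [[pulp.LpVariable(f"x{i}_{j}", cat="Binary") for j in range(n)]
         for i in range(n)]                                              # i -> hub j
    prob += pulp.lpSum(dist[i][j] * x[i][j] for i in range(n) for j in range(n))
    prob += pulp.lpSum(y) == p                        # exactly p hubs
    for i in range(n):
        prob += pulp.lpSum(x[i]) == 1                 # each airport gets one hub
        for j in range(n):
            prob += x[i][j] <= y[j]                   # only open hubs are usable
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [j for j in range(n) if y[j].value() > 0.5]
```

Scenario analysis of the kind the paper reports (e.g. high-speed rail access, pandemic demand shocks) then amounts to re-solving with modified distance or demand inputs.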
{"title":"Selection of Secondary Hub Airport Location Based on Connectivity and Green Airport Solutions","authors":"Seda Hatipoğlu, Hazal Ergül, Ecem Yazıcı","doi":"10.1007/s11067-024-09622-6","DOIUrl":"https://doi.org/10.1007/s11067-024-09622-6","url":null,"abstract":"<p>The air transport systems can be represented as a network where airports serve as nodes and flight paths as links. The degree of connections between airports is defined as connectivity in air transport. With the increasing competition environment after liberalization in the air transport sector, the connectivity of airports is becoming an increasingly important issue. Thanks to Türkiye’s advantageous geographical situation, and the hub and spoke system applied to the network structures of the airline companies operating, the city of Istanbul has become a major transfer point. However, considering the increasing demand for air transport, population growth, tourism and trade volume, capacity constraints of airports, and existing and ongoing transport infrastructure investments, it is considered that the establishment of a polycentric network structure in Türkiye will increase the connectivity in air transport. In this study, in order to contribute to the development of the multi-center air transport model; a hub location selection model has been developed considering the factors affecting the connectivity of airports. GAMS software was used for model solution. In order to measure high-speed train accessibility, green airport applications and the effects of the Covid-19 pandemic, different scenarios were produced, and the solution results were evaluated.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140565963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-Guided Gravity Model for Competitive Facility Location
Pub Date: 2024-03-27 | DOI: 10.1007/s11067-024-09623-5
Dawit Zerom, Zvi Drezner
In this paper we introduce a data analytics approach for specifying the gravity model as applied to competitive facility location. The gravity model is used primarily by marketers to estimate the market share attracted by competing retail facilities. Once the market share is computed, various solution techniques can be applied to find the best locations for one or more new facilities. In competitive facility location research, various parametrized gravity models have been proposed, such as the power and exponential distance-decay specifications. However, parameterized approaches may not be robust to slight data inconsistency, possibly leading to inaccurate market share predictions. As the volume of data available to support managerial decision making grows rapidly, non-parametric (data-guided) approaches are naturally attractive alternatives because they can mitigate parametric biases. We introduce a unified gravity model that encompasses practically all existing parametric gravity models as special cases. We provide a statistical framework for empirically estimating the proposed gravity models, focusing on shopping mall data involving shopping frequency.
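As a baseline for what the unified model generalizes, the classical Huff-style gravity market share with power distance decay is nearly a one-liner. A minimal sketch with illustrative names (the paper's data-guided specification replaces the fixed decay exponent assumed here):

```python
import numpy as np

def huff_shares(attract, dist, lam=2.0):
    """Huff gravity market shares with power distance decay.

    attract: (m,) facility attractiveness; dist: (n, m) customer-to-facility
    distances; lam: decay exponent. Returns (n, m) choice probabilities.
    """
    u = attract[None, :] / dist ** lam        # utility ~ A_j / d_ij^lam
    return u / u.sum(axis=1, keepdims=True)   # share of customer i won by j
```

A facility's total market share follows by weighting each customer row by its buying power and summing, which is the quantity the location optimization then maximizes.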
{"title":"Data-Guided Gravity Model for Competitive Facility Location","authors":"Dawit Zerom, Zvi Drezner","doi":"10.1007/s11067-024-09623-5","DOIUrl":"https://doi.org/10.1007/s11067-024-09623-5","url":null,"abstract":"<p>In this paper we introduce a data analytics approach for specifying the gravity model as applied to competitive facility location. The gravity model is used primarily by marketers to estimate the market share attracted by competing retail facilities. Once the market share is computed, various solution techniques can be applied for finding the best locations for one or more new facilities. In competitive facility location research, various parametrized gravity models have been proposed such as the power and the exponential distance decay specifications. However, parameterized approaches may not be robust to slight data inconsistency and possibly leading to inaccurate market share predictions. As the volume of data available to support managerial decision making is growing rapidly, non-parametric (data-guided) approaches are naturally attractive alternatives as they can mitigate parametric biases. We introduce a unified gravity model that encompasses practically all existing parametric gravity models as special cases. We provide a statistical framework for empirically estimating the proposed gravity models focusing on shopping malls data involving shopping frequency.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"107 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140315838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reliability-Based Mixed Traffic Equilibrium Problem Under Endogenous Market Penetration of Connected Autonomous Vehicles and Uncertainty in Supply
Pub Date: 2024-03-20 | DOI: 10.1007/s11067-024-09621-7
Qi Zhong, Lixin Miao
In this paper, we consider a novel reliability-based network equilibrium problem for mixed traffic flows of human-driven vehicles (HVs) and connected autonomous vehicles (CAVs) with endogenous CAV market penetration and stochastic link capacity degradations. Travelers' perception errors on travel time and their risk-averse behavior in mode choice and path choice are incorporated in the model with a hierarchical choice structure. Due to the differences between HVs and CAVs, the perception errors and the safety margin reserved by risk-averse travelers are assumed to depend on the vehicle type. The path travel time distribution is derived using the moment-matching method, based on the assumptions that link capacity follows a lognormal distribution and that link travel times are correlated. The underlying problem is then formulated as an equivalent variational inequality problem. A path-based algorithm embedded with a Monte Carlo simulation-based method is proposed to solve the model. Numerical experiments illustrate the features of the model and the computational performance of the solution algorithm.
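The Monte Carlo ingredient is easy to illustrate: sample degraded link capacities from a lognormal, push them through a BPR-type delay function, and read off the moments of path travel time. A minimal sketch under those assumptions only; the paper's analytical moment matching, link correlations, and the HV/CAV distinction are omitted.

```python
import numpy as np

def bpr(t0, flow, cap, alpha=0.15, beta=4.0):
    """BPR link travel time: free-flow time t0 inflated by congestion."""
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def path_time_moments(t0, flows, log_mu, log_sigma, n_draws=100_000, seed=0):
    """Mean and std of a path's travel time when each link's capacity is
    lognormal; t0, flows, log_mu, log_sigma are per-link NumPy arrays."""
    rng = np.random.default_rng(seed)
    caps = rng.lognormal(log_mu, log_sigma, size=(n_draws, len(t0)))
    times = bpr(t0[None, :], flows[None, :], caps).sum(axis=1)  # sum over links
    return times.mean(), times.std()
```

These sampled moments are the kind of quantity a path-based equilibrium algorithm needs at each iteration to evaluate risk-averse path costs.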
{"title":"Reliability-Based Mixed Traffic Equilibrium Problem Under Endogenous Market Penetration of Connected Autonomous Vehicles and Uncertainty in Supply","authors":"Qi Zhong, Lixin Miao","doi":"10.1007/s11067-024-09621-7","DOIUrl":"https://doi.org/10.1007/s11067-024-09621-7","url":null,"abstract":"<p>In this paper, we consider a novel reliability-based network equilibrium problem for mixed traffic flows of human-driven vehicles (HVs) and connected autonomous vehicles (CAVs) with endogenous CAV market penetration and stochastic link capacity degradations. Travelers’ perception errors on travel time and their risk-aversive behaviors on mode choice and path choice are incorporated in the model with a hierarchical choice structure. Due to the differences between HVs and CAVs, the perception errors and the safety margin reserved by risk-averse travelers are assumed to be related to the vehicle type. The path travel time distribution is derived by using the moment-matching method based on the assumption that link capacity follows lognormal distribution and link travel times are correlated. Then, the underlying problem is formulated as an equivalent variational inequality problem. A path-based algorithm embedded with the Monte Carlo simulation-based method is proposed to solve the model. Numerical experiments are conducted to illustrate the features of the model and the computational performance of the solution algorithm.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"34 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140168291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strong Convergent Inertial Two-subgradient Extragradient Method for Finding Minimum-norm Solutions of Variational Inequality Problems
Pub Date: 2024-03-15 | DOI: 10.1007/s11067-024-09615-5
In 2012, Censor et al. (Extensions of Korpelevich's extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9):1119–1132, 2012b) proposed the two-subgradient extragradient method (TSEGM). This method does not require computing the projection onto the feasible (closed and convex) set; rather, the two projections are made onto half-spaces. However, the convergence of the TSEGM was puzzling and was hence posed as an open question. Very recently, some authors were able to provide a partial answer to the open question by establishing a weak convergence result for the TSEGM, though under some stringent conditions. In this paper, we propose and study an inertial two-subgradient extragradient method (ITSEGM) for solving monotone variational inequality problems (VIPs). Under more relaxed conditions than the existing results in the literature, we prove that the proposed method converges strongly to a minimum-norm solution of monotone VIPs in Hilbert spaces. Unlike several existing methods for solving VIPs, our method does not require any linesearch technique, which can be time-consuming to implement. Rather, we employ a simple but very efficient self-adaptive step size method that generates a non-monotonic sequence of step sizes. Moreover, we present several numerical experiments to demonstrate the efficiency of the proposed method in comparison with related results in the literature. Finally, we apply our result to an image restoration problem. Our results improve and generalize several existing results in the literature in this direction.
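To make the half-space idea concrete, here is a schematic sketch of an inertial two-subgradient extragradient step for VI(F, C) with C = {x : c(x) <= 0} and c convex (taken differentiable here, so the gradient serves as the subgradient). It uses a fixed step size and omits the paper's self-adaptive steps and the anchoring that yields strong, minimum-norm convergence, so it shows the shape of the iteration, not the authors' method.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the half-space {z : <a, z> <= b} (closed form)."""
    nrm2 = a @ a
    viol = a @ x - b
    return x if nrm2 == 0.0 or viol <= 0.0 else x - (viol / nrm2) * a

def itsegm_sketch(F, c, grad_c, x0, tau=0.1, theta=0.3, iters=500):
    """Schematic inertial two-subgradient extragradient iteration."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = x + theta * (x - x_prev)        # inertial extrapolation
        g = grad_c(w)                       # (sub)gradient of c at w
        # half-space {z : c(w) + <g, z - w> <= 0} contains C
        y = proj_halfspace(w - tau * F(w), g, g @ w - c(w))
        g2 = grad_c(y)                      # second (sub)gradient, at y
        x_prev, x = x, proj_halfspace(w - tau * F(y), g2, g2 @ y - c(y))
    return x
```

The point of the construction is that both projections have the closed form above, so no projection onto C itself is ever computed.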
{"title":"Strong Convergent Inertial Two-subgradient Extragradient Method for Finding Minimum-norm Solutions of Variational Inequality Problems","authors":"","doi":"10.1007/s11067-024-09615-5","DOIUrl":"https://doi.org/10.1007/s11067-024-09615-5","url":null,"abstract":"<h3>Abstract</h3> <p>In 2012, Censor et al. (Extensions of Korpelevich’s extragradient method for the variational inequality problem in Euclidean space. Optimization 61(9):1119–1132, <span>2012b</span>) proposed the two-subgradient extragradient method (TSEGM). This method does not require computing projection onto the feasible (closed and convex) set, but rather the two projections are made onto some half-space. However, the convergence of the TSEGM was puzzling and hence posted as open question. Very recently, some authors were able to provide a partial answer to the open question by establishing weak convergence result for the TSEGM though under some stringent conditions. In this paper, we propose and study an inertial two-subgradient extragradient method (ITSEGM) for solving monotone variational inequality problems (VIPs). Under more relaxed conditions than the existing results in the literature, we prove that proposed method converges strongly to a minimum-norm solution of monotone VIPs in Hilbert spaces. Unlike several of the existing methods in the literature for solving VIPs, our method does not require any linesearch technique, which could be time-consuming to implement. Rather, we employ a simple but very efficient self-adaptive step size method that generates a non-monotonic sequence of step sizes. Moreover, we present several numerical experiments to demonstrate the efficiency of our proposed method in comparison with related results in the literature. Finally, we apply our result to image restoration problem. Our result in this paper improves and generalizes several of the existing results in the literature in this direction.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"98 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140156524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Finding the K Mean-Standard Deviation Shortest Paths Under Travel Time Uncertainty
Pub Date: 2024-03-08 | DOI: 10.1007/s11067-024-09618-2
Maocan Song, Lin Cheng, Huimin Ge, Chao Sun, Ruochen Wang
The mean-standard deviation shortest path problem (MSDSPP) incorporates travel time variability into routing optimization. The idea is that the decision-maker wants not only to minimize the travel time on average, but also to keep its variability as small as possible. The objective is a linear combination of the mean and standard deviation of travel times. This study focuses on the problem of finding the best K optimal paths for the MSDSPP, which we denote the KMSDSPP. When travel time variability is neglected, the KMSDSPP reduces to a K-shortest path problem with expected routing costs. This paper develops two methods to solve the KMSDSPP: a basic method and a deviation path-based method. To find the (k+1)th optimal path, the basic method adds k constraints to exclude the first k optimal paths. Additionally, we introduce the deviation path concept and propose a deviation path-based method. To find the (k+1)th optimal path, the solution space that contains the kth optimal path is decomposed into several subspaces; we only need to search these subspaces to generate additional candidate paths and find the (k+1)th optimal path among the candidates. Numerical experiments on several transportation networks show that the deviation path-based method outperforms the basic method, especially for large values of K. Compared with the basic method, the deviation path-based method saves 90.1% of CPU running time when finding the best 1000 optimal paths in the Anaheim network.
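The objective being ranked is simple to evaluate for a candidate path, even though the standard deviation term is not link-additive, which is what makes the search hard and motivates the deviation path machinery. A minimal sketch, assuming a given vector of link means and a link travel-time covariance matrix (illustrative inputs, not the paper's data):

```python
import numpy as np

def mean_std_cost(path_links, mu, cov, beta=1.0):
    """Mean-standard-deviation cost of a path: E[T] + beta * sqrt(Var[T]).

    path_links: indices of the links on the path; mu: (L,) link mean times;
    cov: (L, L) covariance of link travel times; beta: risk weight.
    """
    idx = np.asarray(path_links)
    mean = mu[idx].sum()
    var = cov[np.ix_(idx, idx)].sum()   # includes link-to-link correlations
    return mean + beta * np.sqrt(var)
```

Both of the paper's methods can be read as strategies for enumerating candidate paths and ranking them by this cost without re-examining the whole path space at each step.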
{"title":"Finding the $$mathrm{K}$$ Mean-Standard Deviation Shortest Paths Under Travel Time Uncertainty","authors":"Maocan Song, Lin Cheng, Huimin Ge, Chao Sun, Ruochen Wang","doi":"10.1007/s11067-024-09618-2","DOIUrl":"https://doi.org/10.1007/s11067-024-09618-2","url":null,"abstract":"<p>The mean-standard deviation shortest path problem (MSDSPP) incorporates the travel time variability into the routing optimization. The idea is that the decision-maker wants to minimize the travel time not only on average, but also to keep their variability as small as possible. Its objective is a linear combination of mean and standard deviation of travel times. This study focuses on the problem of finding the best-<span>(K)</span> optimal paths for the MSDSPP. We denote this problem as the KMSDSPP. When the travel time variability is neglected, the KMSDSPP reduces to a <span>(K)</span>-shortest path problem with expected routing costs. This paper develops two methods to solve the KMSDSPP, including a basic method and a deviation path-based method. To find the <span>(k+1)</span>th optimal path, the basic method adds <span>(k)</span> constraints to exclude the first-<span>(k)</span> optimal paths. Additionally, we introduce the deviation path concept and propose a deviation path-based method. To find the <span>(k+1)</span>th optimal path, the solution space that contains the <span>(k)</span>th optimal path is decomposed into several subspaces. We just need to search these subspaces to generate additional candidate paths and find the <span>(k+1)</span>th optimal path in the set of candidate paths. Numerical experiments are implemented in several transportation networks, showing that the deviation path-based method has superior performance than the basic method, especially for a large value of <span>(K)</span>. Compared with the basic method, the deviation path-based method can save 90.1% CPU running time to find the best <span>(1000)</span> optimal paths in the Anaheim network.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140076696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decentralized vs. Centralized Water Pollution Cleanup in the Ganges in a Model with Three Cities
Pub Date: 2024-03-04 | DOI: 10.1007/s11067-024-09620-8
Amitrajeet Batabyal, Hamid Beladi
We think of the cleanup of water pollution in the Ganges river in India as a local public good and ask whether this cleanup ought to be decentralized or centralized. We depart from the existing literature on this subject in two important ways. First, we allow the heterogeneous spillovers from cleaning up water pollution to be positive or negative. Second, we focus on water pollution cleanup in three cities—Kanpur, Prayagraj, Varanasi—through which the Ganges flows. Our model sheds light on two broad issues. First, we characterize efficient water pollution cleanup in the three cities, we describe how much water pollution is cleaned up under decentralization, we describe the set of cleanup amounts under decentralization, and we discuss why pollution cleanup under decentralization is unlikely to be efficient. Second, we focus on centralization. We derive the tax paid by the inhabitants of the three cities for pollution cleanup, the benefit to a city inhabitant from water pollution cleanup, how majority voting determines how much pollution is cleaned up when the spillovers from cleanup are uniform, and finally, we compare the amounts of pollution cleaned up with majority voting with the efficient pollution cleanup amounts.
{"title":"Decentralized vs. Centralized Water Pollution Cleanup in the Ganges in a Model with Three Cities","authors":"Amitrajeet Batabyal, Hamid Beladi","doi":"10.1007/s11067-024-09620-8","DOIUrl":"https://doi.org/10.1007/s11067-024-09620-8","url":null,"abstract":"<p>We think of the cleanup of water pollution in the Ganges river in India as a local public good and ask whether this cleanup ought to be decentralized or centralized. We depart from the existing literature on this subject in two important ways. First, we allow the heterogeneous spillovers from cleaning up water pollution to be <i>positive</i> or <i>negative</i>. Second, we focus on water pollution cleanup in <i>three</i> cities—Kanpur, Prayagraj, Varanasi—through which the Ganges flows. Our model sheds light on two broad issues. First, we characterize efficient water pollution cleanup in the three cities, we describe how much water pollution is cleaned up under decentralization, we describe the set of cleanup amounts under decentralization, and we discuss why pollution cleanup under decentralization is unlikely to be efficient. Second, we focus on centralization. We derive the tax paid by the inhabitants of the three cities for pollution cleanup, the benefit to a city inhabitant from water pollution cleanup, how majority voting determines how much pollution is cleaned up when the spillovers from cleanup are uniform, and finally, we compare the amounts of pollution cleaned up with majority voting with the efficient pollution cleanup amounts.</p>","PeriodicalId":501141,"journal":{"name":"Networks and Spatial Economics","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140026100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}