Matching GPS traces to (possibly) incomplete map data: bridging map building and map matching
Fernando Torre, David Pitchford, Phil Brown, L. Terveen (DOI: 10.1145/2424321.2424411)
Analysis of geographic data often requires matching GPS traces to road segments. Unfortunately, map data is often incomplete, resulting in failed or incorrect matches. In this paper, we extend an HMM map-matching algorithm to handle missing blocks. We test our algorithm using map data from the Cyclopath geowiki and GPS traces from Cyclopath's mobile app. Even for conservative cutoff distances, our algorithm found a significant amount of missing data per set of GPS traces. We tested the algorithm's accuracy by removing existing blocks from our map dataset. As the cutoff distance was lowered, false negatives decreased from 34% to 16%, while false positives increased from 5% to 10%. Although the algorithm degrades with increasing amounts of missing data, our results show that our extensions have the potential to improve both map matches and map data.
{"title":"Matching GPS traces to (possibly) incomplete map data: bridging map building and map matching","authors":"Fernando Torre, David Pitchford, Phil Brown, L. Terveen","doi":"10.1145/2424321.2424411","DOIUrl":"https://doi.org/10.1145/2424321.2424411","url":null,"abstract":"Analysis of geographic data often requires matching GPS traces to road segments. Unfortunately, map data is often incomplete, resulting in failed or incorrect matches. In this paper, we extend an HMM map-matching algorithm to handle missing blocks. We test our algorithm using map data from the Cyclopath geowiki and GPS traces from Cyclopath's mobile app. Even for conservative cutoff distances, our algorithm found a significant amount of missing data per set of GPS traces. We tested the algorithm for accuracy by removing existing blocks from our map dataset. As the cutoff distance was lowered, false negatives were decreased from 34% to 16% as false positives increased from 5% to 10%. Although the algorithm degrades with increasing amounts of missing data, our results show that our extensions have the potential to improve both map matches and map data.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116675090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EcoMark: evaluating models of vehicular environmental impact
Chenjuan Guo, Yu Ma, B. Yang, Christian S. Jensen, Manohar Kaul (DOI: 10.1145/2424321.2424356)
The reduction of greenhouse gas (GHG) emissions from transportation is essential for achieving politically agreed-upon emissions reduction targets that aim to combat global climate change. So-called eco-routing and eco-driving can substantially reduce GHG emissions caused by vehicular transportation. Enabling them requires the ability to reliably quantify the emissions of vehicles as they travel in a spatial network. Thus, a number of models have been proposed that aim to quantify the emissions of a vehicle based on GPS data from the vehicle and a 3D model of the spatial network it travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply the framework to all models.
{"title":"EcoMark: evaluating models of vehicular environmental impact","authors":"Chenjuan Guo, Yu Ma, B. Yang, Christian S. Jensen, Manohar Kaul","doi":"10.1145/2424321.2424356","DOIUrl":"https://doi.org/10.1145/2424321.2424356","url":null,"abstract":"The reduction of greenhouse gas (GHG) emissions from transportation is essential for achieving politically agreed upon emissions reduction targets that aim to combat global climate change. So-called eco-routing and eco-driving are able to substantially reduce GHG emissions caused by vehicular transportation. To enable these, it is necessary to be able to reliably quantify the emissions of vehicles as they travel in a spatial network. Thus, a number of models have been proposed that aim to quantify the emissions of a vehicle based on GPS data from the vehicle and a 3D model of the spatial network the vehicle travels in. We develop an evaluation framework, called EcoMark, for such environmental impact models. In addition, we survey all eleven state-of-the-art impact models known to us. To gain insight into the capabilities of the models and to understand the effectiveness of the EcoMark, we apply the framework to all models.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129493856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The SMO-index: a succinct moving object structure for timestamp and interval queries
M. Romero, N. Brisaboa, Michael A. Rodriguez (DOI: 10.1145/2424321.2424399)
This paper presents the Succinct Moving Object Index (SMO-Index), which pursues efficiency in both storage and query-processing time for timestamp and interval queries. The data structure stores data and index together in a compact manner, reducing the need for external memory. It is based on a K²-tree that stores snapshots of objects' locations at selected time instants, and on a compact representation of the movement of objects between consecutive snapshots. The experimental evaluation shows that the SMO-Index outperforms the MVR-tree in both space usage and query time when objects move continuously at similar speeds.
{"title":"The SMO-index: a succinct moving object structure for timestamp and interval queries","authors":"M. Romero, N. Brisaboa, Michael A. Rodriguez","doi":"10.1145/2424321.2424399","DOIUrl":"https://doi.org/10.1145/2424321.2424399","url":null,"abstract":"This paper presents the Succinct Moving Object Index (SMO - Index) that pursues efficiency in storage and time of query processing for timestamp and interval queries. The data structure stores data and index together in a compact manner reducing the need of using external memory. It is based on a K2-tree to store snapshots of objects' location at some time instants, and on a compact representation of the movement of objects between consecutive snapshots. The experimental evaluation shows that the SMO-Index overcomes MVR-Tree in space used and time cost when objects constantly move at similar speed.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121173279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The single pixel GPS: learning big data signals from tiny coresets
Dan Feldman, C. Sung, D. Rus (DOI: 10.1145/2424321.2424325)
We present algorithms for simplifying and clustering patterns from sensors such as GPS, LiDAR, and other devices that produce high-dimensional signals. The algorithms are suitable for handling very large (e.g., terabytes) streaming data and can be run in parallel on networks or clouds. Applications include compression, denoising, activity recognition, road matching, and map generation. We encode these problems as (k, m)-segment mean problems. Formally, we provide (1 + ε)-approximations to the k-segment and (k, m)-segment mean of a d-dimensional discrete-time signal. The k-segment mean is a k-piecewise linear function that minimizes the regression distance to the signal. The (k, m)-segment mean has an additional constraint that the projection of the k segments on R^d consists of only m ≤ k segments. Existing algorithms for these problems take O(kn²) and n^O(mk) time, respectively, and O(kn²) space, where n is the length of the signal. Our main tool is a new coreset for discrete-time signals. The coreset is a smart compression of the input signal that allows computation of a (1 + ε)-approximation to the k-segment or (k, m)-segment mean in O(n log n) time for arbitrary constants ε, k, and m. We use coresets to obtain a parallel algorithm that scans the signal in one pass, using space and update time per point that is polynomial in log n. We provide empirical evaluations of the quality of our coreset and experimental results that show how our coreset boosts both inefficient optimal algorithms and existing heuristics. We demonstrate our results for extracting signals from GPS traces. However, the results are more general and applicable to other types of sensors.
{"title":"The single pixel GPS: learning big data signals from tiny coresets","authors":"Dan Feldman, C. Sung, D. Rus","doi":"10.1145/2424321.2424325","DOIUrl":"https://doi.org/10.1145/2424321.2424325","url":null,"abstract":"We present algorithms for simplifying and clustering patterns from sensors such as GPS, LiDAR, and other devices that can produce high-dimensional signals. The algorithms are suitable for handling very large (e.g. terabytes) streaming data and can be run in parallel on networks or clouds. Applications include compression, denoising, activity recognition, road matching, and map generation. We encode these problems as (k, m)-segment mean problems. Formally, we provide (1 + ε)-approximations to the k-segment and (k, m)-segment mean of a d-dimensional discrete-time signal. The k-segment mean is a k-piecewise linear function that minimizes the regression distance to the signal. The (k,m)-segment mean has an additional constraint that the projection of the k segments on Rd consists of only m ≤ k segments. Existing algorithms for these problems take O(kn2) and nO(mk) time respectively and O(kn2) space, where n is the length of the signal. Our main tool is a new coreset for discrete-time signals. The coreset is a smart compression of the input signal that allows computation of a (1 + ε)-approximation to the k-segment or (k,m)-segment mean in O(n log n) time for arbitrary constants ε,k, and m. We use coresets to obtain a parallel algorithm that scans the signal in one pass, using space and update time per point that is polynomial in log n. We provide empirical evaluations of the quality of our coreset and experimental results that show how our coreset boosts both inefficient optimal algorithms and existing heuristics. We demonstrate our results for extracting signals from GPS traces. However, the results are more general and applicable to other types of sensors.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"172 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133006947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Extracting significant places from mobile user GPS trajectories: a bearing change based approach
T. Bhattacharya, L. Kulik, J. Bailey (DOI: 10.1145/2424321.2424374)
Moving object data, in particular of mobile users, is becoming widely available. A GPS trajectory of a moving object is a time-stamped sequence of latitude and longitude coordinates. The analysis and extraction of knowledge from GPS trajectories is important for a range of applications. Existing studies have extracted knowledge from trajectory patterns for both single and multiple GPS trajectories. However, few works have taken into account the unreliability of GPS measurements from mobile devices or focused on the extraction of fine-grained events from a user's GPS trajectory, such as waiting in traffic, at an intersection, or at a bus stop. In this paper, we develop and experimentally evaluate a novel algorithm that analyses a mobile user's bearing change distribution, together with speed and acceleration, to extract the significant places of such events from their GPS trajectory.
{"title":"Extracting significant places from mobile user GPS trajectories: a bearing change based approach","authors":"T. Bhattacharya, L. Kulik, J. Bailey","doi":"10.1145/2424321.2424374","DOIUrl":"https://doi.org/10.1145/2424321.2424374","url":null,"abstract":"Moving object data, in particular of mobile users, is becoming widely available. A GPS trajectory of a moving object is a time-stamped sequence of latitude and longitude coordinates. The analysis and extraction of knowledge from GPS trajectories is important for a range of applications. Existing studies have extracted knowledge from trajectory patterns for both single and multiple GPS trajectories. However, few works have taken into account the unreliability of GPS measurements for mobile devices or focused on the extraction of fine-grained events from a user's GPS trajectory, such as waiting in traffic, at an intersection, or at a bus stop. In this paper, we develop and experimentally evaluate a novel algorithm that analyses a mobile user's bearing change distribution, together with speed and acceleration, to extract significant places of events from their GPS trajectory.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131300941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Location-based and preference-aware recommendation using sparse geo-social networking data
Jie Bao, Yu Zheng, M. Mokbel (DOI: 10.1145/2424321.2424348)
The popularity of location-based social networks provides us with a new platform for understanding users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range, taking into consideration both: 1) user preferences, which are automatically learned from her location history, and 2) social opinions, which are mined from the location histories of local experts. This recommender system can facilitate people's travel not only near their living areas but also in cities that are new to them. As a user can only visit a limited number of locations, the user-location matrix is very sparse, which poses a significant challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system consisting of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different categories of locations, according to their location histories, using an iterative learning model. The online recommendation part selects candidate local experts within a geospatial range who match the user's preferences, using a preference-aware candidate selection algorithm, and then infers a score for the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than the baselines while providing them efficiently.
{"title":"Location-based and preference-aware recommendation using sparse geo-social networking data","authors":"Jie Bao, Yu Zheng, M. Mokbel","doi":"10.1145/2424321.2424348","DOIUrl":"https://doi.org/10.1145/2424321.2424348","url":null,"abstract":"The popularity of location-based social networks provide us with a new platform to understand users' preferences based on their location histories. In this paper, we present a location-based and preference-aware recommender system that offers a particular user a set of venues (such as restaurants) within a geospatial range with the consideration of both: 1) User preferences, which are automatically learned from her location history and 2) Social opinions, which are mined from the location histories of the local experts. This recommender system can facilitate people's travel not only near their living areas but also to a city that is new to them. As a user can only visit a limited number of locations, the user-locations matrix is very sparse, leading to a big challenge to traditional collaborative filtering-based location recommender systems. The problem becomes even more challenging when people travel to a new city. To this end, we propose a novel location recommender system, which consists of two main parts: offline modeling and online recommendation. The offline modeling part models each individual's personal preferences with a weighted category hierarchy (WCH) and infers the expertise of each user in a city with respect to different category of locations according to their location histories using an iterative learning model. The online recommendation part selects candidate local experts in a geospatial range that matches the user's preferences using a preference-aware candidate selection algorithm and then infers a score of the candidate locations based on the opinions of the selected local experts. Finally, the top-k ranked locations are returned as the recommendations for the user. We evaluated our system with a large-scale real dataset collected from Foursquare. The results confirm that our method offers more effective recommendations than baselines, while having a good efficiency of providing location recommendations.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114612344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast generation of multiple resolution instances of raster data sets
L. Arge, H. Haverkort, Constantinos Tsirogiannis (DOI: 10.1145/2424321.2424329)
In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser-resolution rasters from a fine-resolution raster. In this paper we describe efficient algorithms for different variants of this problem. Given a raster G of √N × √N cells, we first consider the problem of computing, for every 2 ≤ μ ≤ √N, a raster G_μ of (√N/μ) × (√N/μ) cells such that each cell of G_μ stores the average of the values of a μ × μ block of cells of G. We describe an algorithm that solves this problem in Θ(N) time when the handled data fit in the main memory of the computer. We also provide two algorithms that solve this problem in external memory, that is, when the input raster is larger than the main memory. The first external algorithm is very easy to implement and requires O(sort(N)) data block transfers from/to external memory, and the second algorithm requires only O(scan(N)) transfers, where sort(N) and scan(N) are the number of transfers needed to sort and scan N elements, respectively. We also study a variant of the problem where, instead of the full input raster, we handle only a connected subregion of arbitrary shape. For this variant we describe an algorithm that runs in Θ(U log N) time in internal memory, where U is the size of the output. We show how this algorithm can be adapted to perform efficiently in external memory using O(sort(U)) data transfers from the disk. We have also implemented two of the presented algorithms, the O(sort(N)) external-memory algorithm for full rasters and the internal-memory algorithm that handles connected subregions, and we demonstrate their efficiency in practice.
{"title":"Fast generation of multiple resolution instances of raster data sets","authors":"L. Arge, H. Haverkort, Constantinos Tsirogiannis","doi":"10.1145/2424321.2424329","DOIUrl":"https://doi.org/10.1145/2424321.2424329","url":null,"abstract":"In many GIS applications it is important to study the characteristics of a raster data set at multiple resolutions. Often this is done by generating several coarser resolution rasters from a fine resolution raster. In this paper we describe efficient algorithms for different variants of this problem. Given a raster G of √N × √N cells we first consider the problem of computing for every 2 ≤ μ ≤ √N a raster Gμ of √N/μ × √N/μ cells such that each cell of Gμ stores the average of the values of μ × μ cells of G. We describe an algorithm that solves this problem in Θ(N) time when the handled data fit in the main memory of the computer. We also provide two algorithms that solve this problem in external memory, that is when the input raster is larger than the main memory. The first external algorithm is very easy to implement and requires O(sort(N)) data block transfers from/to the external memory, and the second algorithm requires only O(scan(N)) transfers, where sort(N) and scan(N) are the number of transfers needed to sort and scan N elements, respectively. We also study a variant of the problem where instead of the full input raster we handle only a connected subregion of arbitrary shape. For this variant we describe an algorithm that runs in Θ(U log N) time in internal memory, where U is the size of the output. We show how this algorithm can be adapted to perform efficiently in the external memory using O(sort(U)) data transfers from the disk. We have also implemented two of the presented algorithms, the O(sort(N)) external memory algorithm for full rasters, and the internal memory algorithm that handles connected subregions, and we demonstrate their efficiency in practice.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123441599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When a city tells a story: urban topic analysis
Felix Kling, A. Pozdnoukhov (DOI: 10.1145/2424321.2424395)
This paper explores the use of textual and event-based citizen-generated data from services such as Twitter and Foursquare to study urban dynamics. It applies a probabilistic topic model to obtain a decomposition of the stream of digital traces into a set of urban topics related to various activities of the citizens in the course of a week. Due to the combined use of implicit textual and movement data, we obtain semantically rich modalities of the urban dynamics and overcome the drawbacks of several previous attempts. Other important advantages of our method include its flexibility and robustness with respect to the varying quality and volume of the incoming data. We describe an implementation architecture of the system, the main outputs of the analysis, and the derived exploratory visualisations. Finally, we discuss the implications of our methodology for enriching location-based services with real-time context.
{"title":"When a city tells a story: urban topic analysis","authors":"Felix Kling, A. Pozdnoukhov","doi":"10.1145/2424321.2424395","DOIUrl":"https://doi.org/10.1145/2424321.2424395","url":null,"abstract":"This paper explores the use of textual and event-based citizen-generated data from services such as Twitter and Foursquare to study urban dynamics. It applies a probabilistic topic model to obtain a decomposition of the stream of digital traces into a set of urban topics related to various activities of the citizens in the course of a week. Due to the combined use of implicit textual and movement data, we obtain semantically rich modalities of the urban dynamics and overcome the drawbacks of several previous attempts. Other important advantages of our method include its flexibility and robustness with respect to the varying quality and volume of the incoming data. We describe an implementation architecture of the system, the main outputs of the analysis, and the derived exploratory visualisations. Finally, we discuss the implications of our methodology for enriching location-based services with real-time context.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128406289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining time relaxed gradual moving object clusters
P. Hai, D. Ienco, P. Poncelet, M. Teisseire (DOI: 10.1145/2424321.2424394)
One of the objectives of spatio-temporal data mining is to analyze moving object datasets to extract interesting patterns. Existing methods typically focus on a fixed group of moving objects over a time period; thus, they cannot capture object movement trends, which can be very useful for better understanding natural moving behavior in various real-world applications. In this paper, we present the novel concept of a "time relaxed gradual trajectory pattern", denoted real-Gpattern, which captures the tendency of object movement. Additionally, we propose an efficient algorithm, called ClusterGrowth, designed to extract the complete set of all interesting maximal real-Gpatterns. Experiments conducted on real and large synthetic datasets demonstrate the effectiveness, parameter sensitivity, and efficiency of our methods.
{"title":"Mining time relaxed gradual moving object clusters","authors":"P. Hai, D. Ienco, P. Poncelet, M. Teisseire","doi":"10.1145/2424321.2424394","DOIUrl":"https://doi.org/10.1145/2424321.2424394","url":null,"abstract":"One of the objectives of spatio-temporal data mining is to analyze moving object datasets to exploit interesting patterns. Traditionally, existing methods only focus on an unchanged group of moving objects during a time period. Thus, they cannot capture object moving trends which can be very useful for better understanding the natural moving behavior in various real world applications. In this paper, we present a novel concept of \"time relaxed gradual trajectory pattern\", denoted real-Gpattern, which captures the object movement tendency. Additionally, we also propose an efficient algorithm, called ClusterGrowth, designed to extract the complete set of all interesting maximal real-Gpatterns. Conducted experiments on real and large synthetic datasets demonstrate the effectiveness, parameter sensitiveness and efficiency of our methods.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123118386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pricing of parking for congestion reduction
D. Ayala, O. Wolfson, Bo Xu, B. Dasgupta, Jie Lin (DOI: 10.1145/2424321.2424328)
The proliferation of mobile devices, location-based services, and embedded wireless sensors has given rise to applications that seek to improve the efficiency of the transportation system. In particular, applications are already available that help travelers find parking in urban settings by conveying the availability of parking slots near their desired destinations on their mobile devices. In this paper we present two notions of parking choice: the optimal and the equilibrium. The equilibrium describes the behavior of individual, selfish agents in a system. We show how a pricing authority can use parking availability information to set prices that entice drivers to choose parking in the optimal way: the way that minimizes the total driving distance of the vehicles and is thus better for the transportation system (by reducing congestion) and for the environment. We present two pricing schemes that perform this task. Furthermore, through simulations we show the potential congestion improvements that can be obtained through the use of these schemes.
{"title":"Pricing of parking for congestion reduction","authors":"D. Ayala, O. Wolfson, Bo Xu, B. Dasgupta, Jie Lin","doi":"10.1145/2424321.2424328","DOIUrl":"https://doi.org/10.1145/2424321.2424328","url":null,"abstract":"The proliferation of mobile devices, location-based services and embedded wireless sensors has given rise to applications that seek to improve the efficiency of the transportation system. In particular, new applications are already available that help travelers to find parking in urban settings by conveying the parking slot availability near the desired destinations of travelers on their mobile devices. In this paper we present two notions of parking choice: the optimal and the equilibrium. The equilibrium describes the behavior of individual, selfish agents in a system. We will show how a pricing authority can use the parking availability information to set prices that entice drivers to choose parking in the optimal way, the way that minimizes total driving distance by the vehicles and is then better for the transportation system (by reducing congestion) and for the environment. We will present two pricing schemes that perform this task. Furthermore, through simulations we show the potential congestion improvements that can be obtained through the use of these schemes.","PeriodicalId":210150,"journal":{"name":"Proceedings of the 20th International Conference on Advances in Geographic Information Systems","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131420603","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}