A Persistence-Based Approach for Individual Tree Mapping
Xin Xu, F. Iuricich, L. Floriani
doi: 10.1145/3397536.3422231

Light Detection and Ranging (LiDAR) sensors generate dense point clouds that can be used to map forest structure at high spatial resolution. In this work, we consider the problem of identifying individual trees in a LiDAR point cloud. Existing techniques generally require intensive parameter tuning and user interaction. Our goal is to define an automatic approach that provides robust results with minimal user interaction. To this end, we define a segmentation algorithm based on the watershed transform and persistence-based simplification. The proposed algorithm uses a divide-and-conquer technique to split a LiDAR point cloud into regions of uniform density. Within each region, single trees are identified by applying a segmentation approach based on watershed by simulated immersion. Experiments show that our approach outperforms state-of-the-art algorithms on most of the study areas in the benchmark provided by the NEW technologies for a better mountain FORest timber mobilization (NEWFOR) project. Moreover, our approach requires a single (Boolean) parameter. This makes it well suited for a wide range of forest analysis applications, including biomass estimation and field inventory surveys.
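The persistence idea can be illustrated on a rasterized canopy height model (CHM): maxima whose persistence falls below a threshold are suppressed before the watershed runs, so each surviving maximum seeds one tree crown. The sketch below is a minimal illustration of that pipeline with scikit-image; the paper's method operates on the point cloud itself with a density-adaptive divide-and-conquer step, and the toy CHM and threshold `h` here are assumptions for demonstration only.

```python
# Minimal sketch: persistence-style maxima suppression + watershed on a CHM.
# The h-maxima transform removes maxima whose height above their saddle
# ("persistence") is below h, which mirrors persistence-based simplification.
import numpy as np
from scipy import ndimage
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def segment_trees(chm: np.ndarray, h: float = 2.0) -> np.ndarray:
    """Label individual tree crowns in a canopy height model."""
    seeds = h_maxima(chm, h)                     # keep only persistent maxima
    markers, _ = ndimage.label(seeds)            # one marker per surviving peak
    # Watershed by simulated immersion on the inverted CHM: crowns are basins.
    return watershed(-chm, markers, mask=chm > 0)

# Toy CHM with two Gaussian crowns (illustrative assumption).
yy, xx = np.mgrid[0:50, 0:50]
chm = 15 * np.exp(-((yy - 15) ** 2 + (xx - 15) ** 2) / 40)
chm += 12 * np.exp(-((yy - 35) ** 2 + (xx - 35) ** 2) / 60)
chm[chm < 0.5] = 0                               # treat low returns as ground
print(np.unique(segment_trees(chm)))             # 0 (ground) plus one label per crown
```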
{"title":"A Persistence-Based Approach for Individual Tree Mapping","authors":"Xin Xu, F. Iuricich, L. Floriani","doi":"10.1145/3397536.3422231","DOIUrl":"https://doi.org/10.1145/3397536.3422231","url":null,"abstract":"Light Detection and Ranging (LiDAR) sensors generate dense point clouds that can be used to map forest structures at a high spatial resolution level. In this work, we consider the problem of identifying individual trees in a LiDAR point cloud. Existing techniques generally require intense parameter tuning and user interactions. Our goal is defining an automatic approach capable of providing robust results with minimal user interactions. To this end, we define a segmentation algorithm based on the watershed transform and persistence-based simplification. The proposed algorithm uses a divide-and-conquer technique to split a LiDAR point cloud into regions with uniform density. Within each region, single trees are identified by applying a segmentation approach based on watershed by simulated immersion. Experiments show that our approach performs better than state-of-the-art algorithms on most of the study areas in the benchmark provided by the NEW technologies for a better mountain FORest timber mobilization (NEWFOR) project. Moreover, our approach requires a single (Boolean) parameter. This makes our approach well suited for a wide range of forest analysis applications, including biomass estimation, or field inventory surveys.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117092791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Machine Learning on Satellite Radar Images to Estimate Damages After Natural Disasters
Boyi Xie, Jeri Xu, Jungkyo Jung, S. Yun, Eric Zeng, E. Brooks, Michaela Dolk, Lokeshkumar Narasimhalu
doi: 10.1145/3397536.3422349
Synthetic Aperture Radar (SAR) satellite imaging is a remote sensing technology that captures ground-surface changes at relatively high resolution. This technology has been used in many applications, one of which is the estimation of damage after natural disasters such as wildfires, earthquakes, and hurricanes. Efficient and accurate damage assessment after natural catastrophes allows the public and private sectors to respond quickly in order to mitigate losses and to better prepare for disaster relief. Advances in machine learning and image processing techniques can be applied to this data to survey large areas and estimate property damage. In this paper, we introduce a machine learning-based approach that takes satellite radar images and geographic data as inputs to classify the damage status of individual buildings after a major wildfire event. We believe the demonstration of this damage estimation methodology and its application to real-world natural disaster events has high potential to improve social resilience.
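As a rough illustration of this kind of per-building classification, the sketch below trains a standard gradient-boosted classifier on a synthetic feature table. The feature names (coherence drop, amplitude change, distance to fire perimeter, footprint area) and the random data are assumptions for demonstration, not the paper's actual inputs or model.

```python
# Minimal sketch: per-building damage classification from SAR-derived features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(0.0, 1.0, n),   # InSAR coherence drop, pre/post event (assumed)
    rng.normal(0.0, 1.0, n),   # SAR amplitude change (assumed)
    rng.uniform(0, 5000, n),   # distance to fire perimeter, meters (assumed)
    rng.uniform(50, 500, n),   # building footprint area, m^2 (assumed)
])
# Synthetic ground truth: damage is likelier near the perimeter with a large
# coherence drop. Real labels would come from post-event damage surveys.
y = ((X[:, 0] > 0.5) & (X[:, 2] < 1500)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```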
{"title":"Machine Learning on Satellite Radar Images to Estimate Damages After Natural Disasters","authors":"Boyi Xie, Jeri Xu, Jungkyo Jung, S. Yun, Eric Zeng, E. Brooks, Michaela Dolk, Lokeshkumar Narasimhalu","doi":"10.1145/3397536.3422349","DOIUrl":"https://doi.org/10.1145/3397536.3422349","url":null,"abstract":"Satellite radar imaging from SAR (Synthetic Aperture Radar) is a remote sensing technology that captures ground surface level changes at a relatively high resolution. This technology has been used in many applications, one of which is the estimation of damages after natural disasters, such as wildfire, earthquake, and hurricane events. An efficient and accurate assessment of damages after natural catastrophe events allows public and private sectors to quickly respond in order to mitigate losses and to better prepare for disaster relief. Advances in machine learning and image processing techniques can be applied to this dataset to survey large areas and estimate property damages. In this paper, we introduce a machine learning-based approach for taking satellite radar images and geographical data as inputs to classify the damage status of individual buildings after a major wildfire event. We believe the demonstration of this damage estimation methodology and its application to real world natural disaster events will have a high potential to improve social resilience.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124849758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Semi-Automated System for Exploring and Fixing OSM Connectivity
Fares Tabet, Birva H. Patel, K. Dinçer, Harsh Govind, Peiwei Cao, Ashley Song, Mohamed H. Ali
doi: 10.1145/3397536.3422347
As an open-license project, OpenStreetMap (OSM) aims to make collectively produced geographic data freely available for various purposes. Routing engines frequently take advantage of this data set. Nonetheless, providing routing services on top of OSM requires full connectivity of the OSM road network graph in the area of interest. This connectivity needs to be achieved individually at every level of the road network graph: the motorway, trunk, primary, secondary, tertiary, and residential roads. However, due to its open-editing nature, the OSM data often contains faults attributable to issues like missing road network connections or mistakenly attributed road segments. In this paper, we demonstrate a system we have developed that helps the end user (i.e., a cartographer) discover and fix connectivity errors in an OSM road network graph. More specifically, the system aims to achieve full connectivity in the overall road network graph, which in turn requires full connectivity at each road level. The system automatically detects connectivity errors that would otherwise remain undetected or require a lengthy manual process to discover. It can accept hints from the editor through its easy-to-use graphical user interface to investigate errors further, improve the detection process, and subsequently fix them. Based on our pilot runs in New Zealand under the supervision of professional cartographers and a team from Microsoft Geospatial, we were able to detect more than 300 incorrect connections and to achieve connectivity across different road levels.
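The detection half of such a system can be illustrated with a connected-components check per road level: within each level's subgraph, small components separated from the giant component are candidates for missing connections. A minimal sketch with networkx, assuming ways have already been loaded as edges carrying a `highway` tag (the demonstrated system adds the interactive repair workflow on top of this kind of check):

```python
# Minimal sketch: flag suspiciously small disconnected components per road level.
import networkx as nx

LEVELS = ["motorway", "trunk", "primary", "secondary", "tertiary", "residential"]

def connectivity_errors(g: nx.Graph, max_component_size: int = 10):
    """Yield (level, component) pairs for small stranded components."""
    for level in LEVELS:
        sub = g.edge_subgraph(
            [(u, v) for u, v, d in g.edges(data=True) if d.get("highway") == level]
        )
        comps = sorted(nx.connected_components(sub), key=len, reverse=True)
        # Everything besides the giant component is a candidate error.
        for comp in comps[1:]:
            if len(comp) <= max_component_size:
                yield level, comp

g = nx.Graph()
g.add_edge("a", "b", highway="primary")
g.add_edge("b", "c", highway="primary")
g.add_edge("x", "y", highway="primary")   # stranded fragment: likely a fault
for level, comp in connectivity_errors(g):
    print(level, sorted(comp))             # primary ['x', 'y']
```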
{"title":"A Semi-Automated System for Exploring and Fixing OSM Connectivity","authors":"Fares Tabet, Birva H. Patel, K. Dinçer, Harsh Govind, Peiwei Cao, Ashley Song, Mohamed H. Ali","doi":"10.1145/3397536.3422347","DOIUrl":"https://doi.org/10.1145/3397536.3422347","url":null,"abstract":"As an open license project, Open Street Map (OSM) aims to make the collectively produced geographic data freely available to be used for various purposes. Routing engines frequently take advantage of this data set. Nonetheless, providing routing services on top of OSM requires the full connectivity of the OSM road network graph in the interest area. This connectivity needs to be achieved individually at every level of the road network graph: the motorway, trunk, primary, secondary, tertiary, and residential roads. However, due to its open-editing nature, the OSM data often contains faults attributed to issues like missing road network connections or mistakenly attributed road segments. In this paper, we demonstrate a system we have developed that helps the end-user (i.e., cartographer) discover and fix the connectivity errors in an OSM road network graph. More specifically, the system aims to achieve full connectivity in the overall road network graph, which in turn requires full connectivity at each road level. The system automatically detects the connectivity errors that would otherwise remain undetected or need a lengthy manual process to discover. It can accept hints from the editor through its easy to use graphical user interface to investigate errors further, improve the detection process, and subsequently fix them. Based on our pilot runs in New Zealand with the supervision of professional cartographers and a team from Microsoft Geospatial, we were able to detect more than 300 incorrect connections and to achieve connectivity across different road levels.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128921539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

(k, l)-Medians Clustering of Trajectories Using Continuous Dynamic Time Warping
Milutin Brankovic, K. Buchin, Koen Klaren, A. Nusser, Aleksandr Popov, Sampson Wong
doi: 10.1145/3397536.3422245
Due to the massively increasing amount of available geospatial data and the need to present it in an understandable way, clustering this data is more important than ever. As clusters might contain a large number of objects, having a representative for each cluster significantly facilitates understanding a clustering. Clustering methods relying on such representatives are called center-based. In this work, we consider the problem of center-based clustering of trajectories. In this setting, the representative of a cluster is again a trajectory. To obtain a compact representation of the clusters and to avoid overfitting, we restrict the complexity of the representative trajectories by a parameter l. This restriction, however, makes discrete distance measures like dynamic time warping (DTW) less suited. There is recent work on center-based clustering of trajectories with a continuous distance measure, namely the Fréchet distance. While the Fréchet distance allows the center complexity to be restricted, it is also sensitive to outliers, whereas averaging-type distance measures like DTW are less so. To obtain a trajectory clustering algorithm that allows restricting center complexity and is more robust to outliers, we propose using a continuous version of DTW as the distance measure, which we call continuous dynamic time warping (CDTW). Our contribution is twofold: (1) to address the lack of practical algorithms for CDTW, we develop an approximation algorithm that computes it; (2) we develop the first clustering algorithm under this distance measure and show a practical way to compute a center from a set of trajectories and subsequently improve it iteratively. To obtain insights into the results of clustering under CDTW on practical data, we conduct extensive experiments.
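To make the setup concrete, the sketch below runs a Lloyd-style (k, l)-medians loop in which discrete DTW on densely resampled curves stands in for CDTW, and the center update simply picks the cluster medoid resampled to l points. Both substitutions are assumptions for illustration; the paper develops a proper CDTW approximation algorithm and an iterative center-improvement step.

```python
# Minimal sketch: (k, l)-medians with a crude dense-resampling DTW stand-in.
import numpy as np

def resample(traj: np.ndarray, m: int) -> np.ndarray:
    """Resample a polyline to m points, uniformly by arc length."""
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    s = np.linspace(0.0, t[-1], m)
    return np.column_stack([np.interp(s, t, traj[:, d]) for d in range(traj.shape[1])])

def dtw(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a) * len(b)) dynamic time warping distance."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[-1, -1]

def kl_medians(trajs, k, l, dense=50, iters=5, seed=0):
    rng = np.random.default_rng(seed)
    dense_trajs = [resample(t, dense) for t in trajs]
    # Initialize centers as l-point resamplings of k random input trajectories.
    centers = [resample(trajs[i], l) for i in rng.choice(len(trajs), k, replace=False)]
    assign = []
    for _ in range(iters):
        # Assignment: nearest center under the (approximate) continuous distance.
        assign = [min(range(k), key=lambda c: dtw(dt, resample(centers[c], dense)))
                  for dt in dense_trajs]
        # Update: cluster medoid, restricted to complexity l.
        for c in range(k):
            members = [i for i, a in enumerate(assign) if a == c]
            if members:
                medoid = min(members, key=lambda i: sum(
                    dtw(dense_trajs[i], dense_trajs[j]) for j in members))
                centers[c] = resample(trajs[medoid], l)
    return centers, assign
```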
{"title":"(k, l)-Medians Clustering of Trajectories Using Continuous Dynamic Time Warping","authors":"Milutin Brankovic, K. Buchin, Koen Klaren, A. Nusser, Aleksandr Popov, Sampson Wong","doi":"10.1145/3397536.3422245","DOIUrl":"https://doi.org/10.1145/3397536.3422245","url":null,"abstract":"Due to the massively increasing amount of available geospatial data and the need to present it in an understandable way, clustering this data is more important than ever. As clusters might contain a large number of objects, having a representative for each cluster significantly facilitates understanding a clustering. Clustering methods relying on such representatives are called center-based. In this work we consider the problem of center-based clustering of trajectories. In this setting, the representative of a cluster is again a trajectory. To obtain a compact representation of the clusters and to avoid overfitting, we restrict the complexity of the representative trajectories by a parameter l. This restriction, however, makes discrete distance measures like dynamic time warping (DTW) less suited. There is recent work on center-based clustering of trajectories with a continuous distance measure, namely, the Fréchet distance. While the Fréchet distance allows for restriction of the center complexity, it can also be sensitive to outliers, whereas averaging-type distance measures, like DTW, are less so. To obtain a trajectory clustering algorithm that allows restricting center complexity and is more robust to outliers, we propose the usage of a continuous version of DTW as distance measure, which we call continuous dynamic time warping (CDTW). Our contribution is twofold: (1) To combat the lack of practical algorithms for CDTW, we develop an approximation algorithm that computes it. (2) We develop the first clustering algorithm under this distance measure and show a practical way to compute a center from a set of trajectories and subsequently iteratively improve it. To obtain insights into the results of clustering under CDTW on practical data, we conduct extensive experiments.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121844773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Turbo-GTS: Scaling Mobile Crowdsourcing using Workload-Balancing Bisection Tree
W. Li, Haiquan Chen, Wei-Shinn Ku, X. Qin
doi: 10.1145/3397536.3422335

In mobile crowdsourcing, workers are financially motivated to perform self-selected tasks to maximize their revenue. Unfortunately, existing task scheduling approaches in mobile crowdsourcing fail to scale to massive tasks and large geographic areas. We present Turbo-GTS, a system that assigns tasks to each worker so as to maximize the total number of tasks that can be completed by the entire worker group, while taking into account various spatial and temporal constraints such as task execution duration, task expiration time, and worker/task geographic locations. The core of Turbo-GTS is WBT-NNH and WBT-NUD, our two newly developed scheduling algorithms, which build on the algorithms QT-NNH and QT-NUD proposed in our prior work [5]. The key idea is that Turbo-GTS performs dynamic workload balancing among all workers using the proposed Workload-balancing Bisection Tree (WBT) in support of large-scale Geo-Task Scheduling (GTS). Turbo-GTS includes an interactive interface for users to load the current task/worker distributions and compare the task assignments of each worker returned by different algorithms in real time. Using Foursquare mobile user check-in data from New York City and Tokyo, we show the superiority of Turbo-GTS over the state of the art in terms of the total number of tasks that can be accomplished by the entire worker group and the corresponding running time. We also demonstrate the front-end interface of Turbo-GTS with two exploratory use cases in New York City.
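The bisection-tree idea can be sketched independently of the scheduling details: recursively split the task set along its wider geographic axis into equal-count halves until each leaf is small enough for one worker. This toy version, with an assumed `leaf_size` knob, balances only task counts; WBT itself balances actual workload under the temporal constraints above.

```python
# Minimal sketch: geographic bisection with balanced task counts per leaf.
import numpy as np

def bisect_tasks(points: np.ndarray, leaf_size: int):
    """Return a list of leaf task groups (arrays of 2D task locations)."""
    if len(points) <= leaf_size:
        return [points]
    spans = points.max(axis=0) - points.min(axis=0)
    axis = int(np.argmax(spans))       # split across the wider geographic extent
    order = np.argsort(points[:, axis])
    mid = len(points) // 2             # equal-count halves keep the tree balanced
    return (bisect_tasks(points[order[:mid]], leaf_size)
            + bisect_tasks(points[order[mid:]], leaf_size))

tasks = np.random.default_rng(1).uniform(0, 100, size=(1000, 2))
leaves = bisect_tasks(tasks, leaf_size=64)
print(len(leaves), [len(leaf) for leaf in leaves[:4]])   # near-equal leaf sizes
```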
{"title":"Turbo-GTS: Scaling Mobile Crowdsourcing using Workload-Balancing Bisection Tree","authors":"W. Li, Haiquan Chen, Wei-Shinn Ku, X. Qin","doi":"10.1145/3397536.3422335","DOIUrl":"https://doi.org/10.1145/3397536.3422335","url":null,"abstract":"In mobile crowdsourcing, workers are financially motivated to perform self-selected tasks to maximize their revenue. Unfortunately, the existing task scheduling approaches in mobile crowdsourcing fail to scale for massive tasks and large geographic areas. We present Turbo-GTS, a system that assigns tasks to each worker to maximize the total number of the tasks that can be completed for an entire worker group while taking into account various spatial and temporal constraints, such as task execution duration, task expiration time, and worker/task geographic locations. The core of Turbo-GTS is WBT-NNH and WBT-NUD, our two newly developed scheduling algorithms, which build on the algorithms, QT-NNH and QT-NUD, proposed in our prior work [5]. The key idea is that Turbo-GTS performs dynamic workload balancing among all workers using the proposed Workload-balancing Bisection Tree (WBT) in support of large-scale Geo-Task Scheduling (GTS). Turbo-GTS includes an interactive interface for users to load the current task/worker distributions and compare the task assignment of each worker returned by different algorithms in a real-time fashion. Using the Foursquare mobile user check-in data in New York City and Tokyo, we show the superiority of Turbo-GTS over the state of the art in terms of the total number of the tasks that can be accomplished by the entire worker group and the corresponding running time. We also demonstrate the front-end interface of Turbo-GTS with two exploratory use cases in New York City.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121276590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Graph Convolutional Networks with Kalman Filtering for Traffic Prediction
Fanglan Chen, Zhiqian Chen, Subhodip Biswas, Shuo Lei, Naren Ramakrishnan, Chang-Tien Lu
doi: 10.1145/3397536.3422257

Traffic prediction is a challenging task due to the time-varying nature of traffic patterns and the complex spatial dependency of road networks. Adding to the challenge, traffic sensor reporting introduces a number of errors, including bias and noise. However, most previous works treat the sensor observations as exact measurements, ignoring the effect of unknown noise. To model the spatial and temporal dependencies, existing studies combine graph neural networks (GNNs) with other deep learning techniques, but their equal weighting of different dependencies limits the models' ability to capture the real dynamics in the traffic network. To deal with these issues, we propose a novel deep learning framework called Deep Kalman Filtering Network (DKFN) that forecasts the network-wide traffic state by modeling the self and neighbor dependencies as two streams, whose predictions are fused under statistical theory and optimized through the Kalman filtering network. First, the reliability of each stream is evaluated using variances. Then, the Kalman filter is leveraged to properly fuse noisy observations according to their reliability. Experimental results reflect the superiority of the proposed method over baseline models on two real-world traffic datasets in the speed prediction task.
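The fusion step admits a compact illustration: for two estimates of the same quantity with known variances, the scalar Kalman update is inverse-variance weighting. The numbers below are illustrative assumptions; in DKFN the variances come from the learned streams.

```python
# Minimal sketch: scalar Kalman fusion of two prediction streams.
def kalman_fuse(x1: float, var1: float, x2: float, var2: float):
    """Fuse two noisy estimates of the same quantity."""
    gain = var1 / (var1 + var2)           # Kalman gain toward the second stream
    x = x1 + gain * (x2 - x1)             # fused mean
    var = var1 * var2 / (var1 + var2)     # fused variance, always <= both inputs
    return x, var

# Self-stream predicts 62 km/h (noisy); neighbor-stream predicts 55 km/h (reliable).
x, var = kalman_fuse(62.0, 9.0, 55.0, 1.0)
print(round(x, 2), round(var, 2))   # 55.7 0.9: pulled toward the reliable stream
```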
{"title":"Graph Convolutional Networks with Kalman Filtering for Traffic Prediction","authors":"Fanglan Chen, Zhiqian Chen, Subhodip Biswas, Shuo Lei, Naren Ramakrishnan, Chang-Tien Lu","doi":"10.1145/3397536.3422257","DOIUrl":"https://doi.org/10.1145/3397536.3422257","url":null,"abstract":"Traffic prediction is a challenging task due to the time-varying nature of traffic patterns and the complex spatial dependency of road networks. Adding to the challenge, there are a number of errors introduced in traffic sensor reporting, including bias and noise. However, most of the previous works treat the sensor observations as exact measures ignoring the effect of unknown noise. To model the spatial and temporal dependencies, existing studies combine graph neural networks (GNNs) with other deep learning techniques but their equal weighting of different dependencies limits the models' ability to capture the real dynamics in the traffic network. To deal with the above issues, we propose a novel deep learning framework called Deep Kalman Filtering Network (DKFN) to forecast the network-wide traffic state by modeling the self and neighbor dependencies as two streams, and their predictions are fused under the statistical theory and optimized through the Kalman filtering network. First, the reliability of each stream is evaluated using variances. Then, the Kalman filter is leveraged to properly fuse noisy observations in terms of their reliability. Experimental results reflect the superiority of the proposed method over baseline models on two real-world traffic datasets in the speed prediction task.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"35 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Location Accuracy Estimates for Signal Fingerprinting
John Krumm
doi: 10.1145/3397536.3422243

Location fingerprinting is a technique for determining the location of a device by measuring ambient signals such as radio signal strength, temperature, or any signal that varies with location. The accuracy of the technique is compromised by signal noise, quantization, and limited calibration resources. We develop generic, probabilistic models of location fingerprinting to derive accuracy estimates. In one case, we look at predeployment modeling to predict accuracy before any signals have been measured, using a new concept of noisy reverse geocoding. In another case, we model a previously deployed system to predict its accuracy. The models allow us to explore the accuracy implications of signal noise, calibration effort, and quantization of signals and space.
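A toy version of such a probabilistic model: given a calibration ("radio") map over a grid and a Gaussian noise model, compute the posterior over cells for one observed reading. The grid size, noise sigma, and random radio map below are assumptions for illustration, not the paper's models.

```python
# Minimal sketch: Bayesian grid localization from one noisy fingerprint reading.
import numpy as np

rng = np.random.default_rng(0)
grid = 32
radio_map = rng.uniform(-90, -30, size=(grid, grid))   # expected RSS per cell (dBm)
sigma = 4.0                                            # assumed signal noise (dB)

true_cell = (10, 20)
reading = radio_map[true_cell] + rng.normal(0, sigma)

# Posterior over cells (uniform prior): p(cell | reading) ∝ N(reading; map, sigma).
log_lik = -0.5 * ((reading - radio_map) / sigma) ** 2
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

# Posterior-mean location estimate, in cell units. With a single scalar signal
# the posterior spreads along a level set of the map, which is exactly why
# noise, quantization, and calibration limit achievable accuracy.
ys, xs = np.mgrid[0:grid, 0:grid]
est = np.array([np.sum(post * ys), np.sum(post * xs)])
print("estimate:", est.round(1), "true:", true_cell)
```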
{"title":"Location Accuracy Estimates for Signal Fingerprinting","authors":"John Krumm","doi":"10.1145/3397536.3422243","DOIUrl":"https://doi.org/10.1145/3397536.3422243","url":null,"abstract":"Location fingerprinting is a technique for determining the location of a device by measuring ambient signals such as radio signal strength, temperature, or any signal that varies with location. The accuracy of the technique is compromised by signal noise, quantization, and limited calibration resources. We develop generic, probabilistic models of location fingerprinting to find accuracy estimates. In one case, we look at predeployment modeling to predict accuracy before any signals have been measured using a new concept of noisy reverse geocoding. In another case, we model a previously deployed system to predict its accuracy. The models allow us to explore the accuracy implications of signal noise, calibration effort, and quantization of signals and space.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131003249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

PinSout
Taehoon Kim, Wijae Cho, Akiyoshi Matono, Kyoung-Sook Kim
doi: 10.1145/3397536.3422343
With the development of Light Detection and Ranging (LiDAR) technology, point cloud data is a valuable resource for building three-dimensional (3D) models of digital twins. The geospatial 3D model is the principal element for abstracting a geographic feature with geometric and semantic properties. Compared to point clouds, 3D model data is more efficient to handle, retrieve, exchange, and visualize. However, the construction of 3D models, especially of indoor spaces where various objects exist, usually requires expensive time and manual labor to organize and extract the geometry information with authoring tools. This demonstration introduces Point-in Space-out (PinSout), a new framework that automatically generates 3D space models from raw 3D point cloud data by leveraging three open-source software packages: PointNet, the Point Cloud Library (PCL), and the 3D City Database (3DCityDB). The framework performs semantic segmentation with PointNet, a deep learning algorithm for point clouds, to assign a target label, such as wall, floor, or ceiling, to each point of a point cloud. It then divides the point cloud into clusters for each label and computes surface elements with PCL. Each surface is stored in a 3DCityDB database for export as OGC CityGML data. Finally, we evaluate the accuracy with two datasets: a synthetic point cloud generated from a 3D model and a real dataset captured in exhibition halls.
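The post-segmentation steps can be sketched as clustering the points of one semantic label into instances and fitting a plane to each instance. The sketch below uses DBSCAN and a least-squares (SVD) plane fit as illustrative stand-ins for the PCL surface extraction the framework actually uses; the parameters and toy data are assumptions.

```python
# Minimal sketch: per-label instance clustering + planar surface fitting.
import numpy as np
from sklearn.cluster import DBSCAN

def fit_plane(pts: np.ndarray):
    """Least-squares plane: returns (unit normal, a point on the plane)."""
    centroid = pts.mean(axis=0)
    _, _, vh = np.linalg.svd(pts - centroid)
    return vh[-1], centroid       # normal = direction of least variance

def surfaces_for_label(points: np.ndarray, eps=0.5, min_samples=10):
    """Yield one fitted plane per spatial cluster of same-label points."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    for inst in set(labels) - {-1}:           # -1 is DBSCAN noise
        yield fit_plane(points[labels == inst])

# Toy "wall": points near the x = 1 plane with small noise.
rng = np.random.default_rng(0)
wall = np.column_stack([np.full(1000, 1.0) + rng.normal(0, 0.01, 1000),
                        rng.uniform(0, 5, 1000), rng.uniform(0, 3, 1000)])
for normal, origin in surfaces_for_label(wall):
    print(normal.round(2), origin.round(2))   # normal ≈ ±[1, 0, 0]
```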
{"title":"PinSout","authors":"Taehoon Kim, Wijae Cho, Akiyoshi Matono, Kyoung-Sook Kim","doi":"10.1145/3397536.3422343","DOIUrl":"https://doi.org/10.1145/3397536.3422343","url":null,"abstract":"With the development of Light Detection and Ranging (LiDAR) technology, point cloud data is a valuable resource to build three-dimensional (3D) models of digital twins. The geospatial 3D model is the principal element to abstract a geographic feature with geometric and semantic properties. The 3D model data provides more efficiency to handle, retrieve, exchange, and visualize geographic features compared to point clouds. However, the construction of 3D models, especially indoor space where various objects exist, usually necessitates expensive time and manual labor resources to organize and extract the geometry information by authoring tools. This demonstration introduces Point-in Space-out (PinSout), a new framework to automatically generate 3D space models from raw 3D point cloud data by leveraging three open-source software: PointNet, Point Cloud Library (PCL), and 3D City Database (3DCityDB). The framework performs the semantic segmentation by PointNet, a deep learning algorithm for the point cloud, to assign a target label to each point from a point cloud, such as walls, floors, and ceilings. It then divides the point cloud into each label cluster and computes surface elements by PCL. Each surface is stored into a 3DCityDB database to export an OGC CityGML data. Finally, we evaluate the accuracy with two datasets: a synthetic point-cloud set of a 3D model and a real dataset taken from the exhibition halls.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121349992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Ambulance Dispatch via Deep Reinforcement Learning
Kunpeng Liu, Xiaolin Li, C. Zou, Haibo Huang, Yanjie Fu
doi: 10.1145/3397536.3422204
In this paper, we solve the ambulance dispatch problem with a reinforcement learning-oriented strategy. The ambulance dispatch problem is defined as deciding which ambulance picks up which patient. Traditional studies on ambulance dispatch mainly focus on predefined protocols and are verified on simple simulation data, which is not flexible enough for dynamically changing real-world cases. In this paper, we propose an efficient ambulance dispatch method based on the reinforcement learning framework, i.e., Multi-Agent Q-Network with Experience Replay (MAQR). Specifically, we first reformulate the ambulance dispatch problem within a multi-agent reinforcement learning framework, and then design the state, action, and reward functions correspondingly. Third, we design a simulator that controls ambulance status, generates patient requests, and interacts with ambulances. Finally, we design extensive experiments to demonstrate the superiority of the proposed method.
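The experience-replay machinery can be sketched in a few lines: each transition is stored in a buffer, and learning samples random minibatches off-policy. A tabular Q-function over an assumed toy state space stands in here for the paper's deep Q-network; states, actions, and rewards are illustrative.

```python
# Minimal sketch: Q-learning with an experience-replay buffer.
import random
from collections import deque
import numpy as np

N_STATES, N_ACTIONS = 100, 5      # toy discretization, e.g. (zone, request) pairs
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
replay = deque(maxlen=10_000)

def act(state: int) -> int:
    if random.random() < EPS:                  # epsilon-greedy exploration
        return random.randrange(N_ACTIONS)
    return int(Q[state].argmax())

def learn(batch_size: int = 32) -> None:
    if len(replay) < batch_size:
        return
    for s, a, r, s_next in random.sample(replay, batch_size):
        target = r + GAMMA * Q[s_next].max()   # bootstrap from the next state
        Q[s, a] += ALPHA * (target - Q[s, a])

# One simulated step; the environment would be the paper's dispatch simulator.
s = 3
a = act(s)
r, s_next = -1.0, 7                            # e.g., negative pickup-time reward
replay.append((s, a, r, s_next))
learn()
```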

Distributed Spatiotemporal Trajectory Query Processing in SQL
Mohamed S. Bakli, M. Sakr, E. Zimányi
doi: 10.1145/3397536.3422262

Nowadays, the collection of moving object data is increasing significantly due to the ubiquity of GPS-enabled devices. Managing and analyzing this kind of data is crucial in many application domains, including social mobility, pandemics, and transportation. In previous work, we proposed the MobilityDB moving object database system. It is a production-ready system built on top of PostgreSQL and PostGIS. It accepts SQL queries and offers most of the common spatiotemporal types and operations. In this paper, to address the scalability requirements of big data, we provide an architecture and an implementation of a distributed moving object database system based on MobilityDB. More specifically, we define: (1) an architecture for deploying a distributed MobilityDB database on a cluster using readily available tools, (2) two alternative methods for trajectory data partitioning and index partitioning, and (3) a query optimizer capable of distributing spatiotemporal SQL queries over multiple MobilityDB instances. The overall outcome is that the cluster is managed in SQL at run time and that user queries are transparently distributed and executed. This is validated with experiments using a real dataset, which also compare MobilityDB with other relevant systems.
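From the user's perspective, a distributed query looks like any MobilityDB query issued against a coordinator node, which fans it out to the partitioned workers. The sketch below sends one such query from Python; the table layout, connection string, and the particular temporal functions (speed, twAvg, eintersects) follow MobilityDB's documented tgeompoint interface as I understand it, but should be treated as assumptions about a specific deployment rather than the paper's exact schema.

```python
# Minimal sketch: issuing a spatiotemporal SQL query against a MobilityDB
# coordinator from Python. Table name, column names, and host are assumptions.
import psycopg2

SQL = """
SELECT vehicle_id, twAvg(speed(trip)) AS avg_speed
FROM trips   -- trip is assumed to be a MobilityDB tgeompoint column
WHERE eintersects(trip, ST_MakeEnvelope(4.30, 50.80, 4.45, 50.90, 4326))
GROUP BY vehicle_id;
"""

with psycopg2.connect("dbname=mobility host=coordinator") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
        for vehicle_id, avg_speed in cur.fetchall():
            print(vehicle_id, avg_speed)
```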
{"title":"Distributed Spatiotemporal Trajectory Query Processing in SQL","authors":"Mohamed S. Bakli, M. Sakr, E. Zimányi","doi":"10.1145/3397536.3422262","DOIUrl":"https://doi.org/10.1145/3397536.3422262","url":null,"abstract":"Nowadays, the collection of moving object data is significantly increasing due to the ubiquity of GPS-enabled devices. Managing and analyzing this kind of data is crucial in many application domains, including social mobility, pandemics, and transportation. In previous work, we have proposed the MobilityDB moving object database system. It is a production-ready system, that is built on top of PostgreSQL and PostGIS. It accepts SQL queries and offers most of the common spatiotemporal types and operations. In this paper, to address the scalability requirement of big data, we provide an architecture and an implementation of a distributed moving object database system based on MobilityDB. More specifically, we define: (1) an architecture for deploying a distributed MobilityDB database on a cluster using readily available tools, (2) two alternative trajectory data partitioning and index partitioning methods, and (3) a query optimizer that is capable of distributing spatiotemporal SQL queries over multiple MobilityDB instances. The overall outcome is that the cluster is managed in SQL at the run-time and that the user queries are transparently distributed and executed. This is validated with experiments using a real dataset, which also compares MobilityDB with other relevant systems.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114264865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}