
Proceedings of the 28th International Conference on Advances in Geographic Information Systems: Latest Publications

A Persistence-Based Approach for Individual Tree Mapping
Xin Xu, F. Iuricich, L. Floriani
Light Detection and Ranging (LiDAR) sensors generate dense point clouds that can be used to map forest structures at high spatial resolution. In this work, we consider the problem of identifying individual trees in a LiDAR point cloud. Existing techniques generally require intense parameter tuning and user interaction. Our goal is to define an automatic approach capable of providing robust results with minimal user interaction. To this end, we define a segmentation algorithm based on the watershed transform and persistence-based simplification. The proposed algorithm uses a divide-and-conquer technique to split a LiDAR point cloud into regions of uniform density. Within each region, single trees are identified by applying a segmentation approach based on watershed by simulated immersion. Experiments show that our approach performs better than state-of-the-art algorithms on most of the study areas in the benchmark provided by the NEW technologies for a better mountain FORest timber mobilization (NEWFOR) project. Moreover, our approach requires a single (Boolean) parameter, which makes it well suited for a wide range of forest analysis applications, including biomass estimation and field inventory surveys.
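The combination described above (persistence-based simplification followed by watershed by simulated immersion) can be illustrated with a small raster sketch. The snippet below is a minimal approximation, not the authors' implementation: it assumes the point cloud has already been rasterized into a canopy height model (`chm`), uses scikit-image's h-maxima transform as a stand-in for persistence-based simplification, and grows one crown per surviving maximum with marker-controlled watershed. The thresholds `h` and `min_height` are illustrative parameters.

```python
# Hedged sketch: persistence-style peak pruning + watershed on an inverted CHM.
import numpy as np
from scipy import ndimage
from skimage.morphology import h_maxima
from skimage.segmentation import watershed

def segment_trees(chm: np.ndarray, h: float = 2.0, min_height: float = 2.0):
    """Label individual tree crowns in a canopy height model (CHM)."""
    canopy_mask = chm > min_height            # ignore ground and low vegetation
    peaks = h_maxima(chm, h)                  # keep maxima whose persistence is at least h
    markers, n_trees = ndimage.label(peaks)   # one marker per surviving peak
    # Watershed by immersion on the inverted CHM grows one basin (crown) per marker.
    labels = watershed(-chm, markers, mask=canopy_mask)
    return labels, n_trees

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    chm = ndimage.gaussian_filter(rng.random((200, 200)) * 20, sigma=5)  # synthetic CHM
    labels, n = segment_trees(chm)
    print("detected crowns:", n)
```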
{"title":"A Persistence-Based Approach for Individual Tree Mapping","authors":"Xin Xu, F. Iuricich, L. Floriani","doi":"10.1145/3397536.3422231","DOIUrl":"https://doi.org/10.1145/3397536.3422231","url":null,"abstract":"Light Detection and Ranging (LiDAR) sensors generate dense point clouds that can be used to map forest structures at a high spatial resolution level. In this work, we consider the problem of identifying individual trees in a LiDAR point cloud. Existing techniques generally require intense parameter tuning and user interactions. Our goal is defining an automatic approach capable of providing robust results with minimal user interactions. To this end, we define a segmentation algorithm based on the watershed transform and persistence-based simplification. The proposed algorithm uses a divide-and-conquer technique to split a LiDAR point cloud into regions with uniform density. Within each region, single trees are identified by applying a segmentation approach based on watershed by simulated immersion. Experiments show that our approach performs better than state-of-the-art algorithms on most of the study areas in the benchmark provided by the NEW technologies for a better mountain FORest timber mobilization (NEWFOR) project. Moreover, our approach requires a single (Boolean) parameter. This makes our approach well suited for a wide range of forest analysis applications, including biomass estimation, or field inventory surveys.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117092791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Machine Learning on Satellite Radar Images to Estimate Damages After Natural Disasters
Boyi Xie, Jeri Xu, Jungkyo Jung, S. Yun, Eric Zeng, E. Brooks, Michaela Dolk, Lokeshkumar Narasimhalu
Satellite radar imaging with Synthetic Aperture Radar (SAR) is a remote sensing technology that captures ground-surface-level changes at relatively high resolution. This technology has been used in many applications, one of which is the estimation of damages after natural disasters, such as wildfire, earthquake, and hurricane events. An efficient and accurate assessment of damages after natural catastrophe events allows public and private sectors to quickly respond in order to mitigate losses and to better prepare for disaster relief. Advances in machine learning and image processing techniques can be applied to this data to survey large areas and estimate property damages. In this paper, we introduce a machine learning-based approach that takes satellite radar images and geographical data as inputs to classify the damage status of individual buildings after a major wildfire event. We believe the demonstration of this damage estimation methodology and its application to real-world natural disaster events has high potential to improve social resilience.
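As a rough illustration of the classification step only, and not the paper's model, the sketch below trains a standard classifier on synthetic per-building features. The feature names (coherence loss, amplitude change, distance to fire perimeter) and the choice of a random forest are assumptions made for the example.

```python
# Hedged sketch: per-building damage classification from SAR-derived + geographic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n_buildings = 1000
X = np.column_stack([
    rng.uniform(0, 1, n_buildings),   # pre/post-event coherence loss (synthetic)
    rng.normal(0, 3, n_buildings),    # backscatter amplitude change in dB (synthetic)
    rng.uniform(0, 5, n_buildings),   # distance to fire perimeter in km (synthetic)
])
# Synthetic "damaged / not damaged" labels loosely tied to the features above.
y = (0.6 * X[:, 0] + 0.1 * np.abs(X[:, 1]) - 0.2 * X[:, 2]
     + rng.normal(0, 0.1, n_buildings)) > 0.3

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```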
{"title":"Machine Learning on Satellite Radar Images to Estimate Damages After Natural Disasters","authors":"Boyi Xie, Jeri Xu, Jungkyo Jung, S. Yun, Eric Zeng, E. Brooks, Michaela Dolk, Lokeshkumar Narasimhalu","doi":"10.1145/3397536.3422349","DOIUrl":"https://doi.org/10.1145/3397536.3422349","url":null,"abstract":"Satellite radar imaging from SAR (Synthetic Aperture Radar) is a remote sensing technology that captures ground surface level changes at a relatively high resolution. This technology has been used in many applications, one of which is the estimation of damages after natural disasters, such as wildfire, earthquake, and hurricane events. An efficient and accurate assessment of damages after natural catastrophe events allows public and private sectors to quickly respond in order to mitigate losses and to better prepare for disaster relief. Advances in machine learning and image processing techniques can be applied to this dataset to survey large areas and estimate property damages. In this paper, we introduce a machine learning-based approach for taking satellite radar images and geographical data as inputs to classify the damage status of individual buildings after a major wildfire event. We believe the demonstration of this damage estimation methodology and its application to real world natural disaster events will have a high potential to improve social resilience.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124849758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
A Semi-Automated System for Exploring and Fixing OSM Connectivity
Fares Tabet, Birva H. Patel, K. Dinçer, Harsh Govind, Peiwei Cao, Ashley Song, Mohamed H. Ali
As an open-license project, OpenStreetMap (OSM) aims to make collectively produced geographic data freely available for various purposes. Routing engines frequently take advantage of this data set. Nonetheless, providing routing services on top of OSM requires full connectivity of the OSM road network graph in the area of interest. This connectivity needs to be achieved individually at every level of the road network graph: motorway, trunk, primary, secondary, tertiary, and residential roads. However, due to its open-editing nature, OSM data often contains faults such as missing road network connections or mistakenly attributed road segments. In this paper, we demonstrate a system we have developed that helps the end user (i.e., a cartographer) discover and fix connectivity errors in an OSM road network graph. More specifically, the system aims to achieve full connectivity in the overall road network graph, which in turn requires full connectivity at each road level. The system automatically detects connectivity errors that would otherwise remain undetected or require a lengthy manual process to discover. It can accept hints from the editor through its easy-to-use graphical user interface to investigate errors further, improve the detection process, and subsequently fix them. Based on our pilot runs in New Zealand, under the supervision of professional cartographers and a team from Microsoft Geospatial, we were able to detect more than 300 incorrect connections and achieve connectivity across different road levels.
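A minimal sketch of the detection idea only (the paper's full system, GUI, and cartographer hints are not modeled here): build one graph per road level and flag every component outside the largest one as a candidate connectivity error. The edge format and the per-level check are assumptions for illustration.

```python
# Hedged sketch: per-level connected-component check on a road network graph.
import networkx as nx

ROAD_LEVELS = ["motorway", "trunk", "primary", "secondary", "tertiary", "residential"]

def suspect_components(edges):
    """edges: iterable of (node_u, node_v, level). Returns candidate error components per level."""
    suspects = {}
    for level in ROAD_LEVELS:
        g = nx.Graph([(u, v) for u, v, lvl in edges if lvl == level])
        if g.number_of_nodes() == 0:
            continue
        components = sorted(nx.connected_components(g), key=len, reverse=True)
        # Everything disconnected from the largest component needs a cartographer's review.
        suspects[level] = components[1:]
    return suspects

edges = [(1, 2, "primary"), (2, 3, "primary"), (10, 11, "primary"),   # 10-11 is an island
         (4, 5, "residential")]
print({lvl: [sorted(c) for c in comps] for lvl, comps in suspect_components(edges).items()})
```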
{"title":"A Semi-Automated System for Exploring and Fixing OSM Connectivity","authors":"Fares Tabet, Birva H. Patel, K. Dinçer, Harsh Govind, Peiwei Cao, Ashley Song, Mohamed H. Ali","doi":"10.1145/3397536.3422347","DOIUrl":"https://doi.org/10.1145/3397536.3422347","url":null,"abstract":"As an open license project, Open Street Map (OSM) aims to make the collectively produced geographic data freely available to be used for various purposes. Routing engines frequently take advantage of this data set. Nonetheless, providing routing services on top of OSM requires the full connectivity of the OSM road network graph in the interest area. This connectivity needs to be achieved individually at every level of the road network graph: the motorway, trunk, primary, secondary, tertiary, and residential roads. However, due to its open-editing nature, the OSM data often contains faults attributed to issues like missing road network connections or mistakenly attributed road segments. In this paper, we demonstrate a system we have developed that helps the end-user (i.e., cartographer) discover and fix the connectivity errors in an OSM road network graph. More specifically, the system aims to achieve full connectivity in the overall road network graph, which in turn requires full connectivity at each road level. The system automatically detects the connectivity errors that would otherwise remain undetected or need a lengthy manual process to discover. It can accept hints from the editor through its easy to use graphical user interface to investigate errors further, improve the detection process, and subsequently fix them. Based on our pilot runs in New Zealand with the supervision of professional cartographers and a team from Microsoft Geospatial, we were able to detect more than 300 incorrect connections and to achieve connectivity across different road levels.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128921539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
(k, l)-Medians Clustering of Trajectories Using Continuous Dynamic Time Warping
Milutin Brankovic, K. Buchin, Koen Klaren, A. Nusser, Aleksandr Popov, Sampson Wong
Due to the massively increasing amount of available geospatial data and the need to present it in an understandable way, clustering this data is more important than ever. As clusters might contain a large number of objects, having a representative for each cluster significantly facilitates understanding a clustering. Clustering methods relying on such representatives are called center-based. In this work we consider the problem of center-based clustering of trajectories. In this setting, the representative of a cluster is again a trajectory. To obtain a compact representation of the clusters and to avoid overfitting, we restrict the complexity of the representative trajectories by a parameter l. This restriction, however, makes discrete distance measures like dynamic time warping (DTW) less suited. There is recent work on center-based clustering of trajectories with a continuous distance measure, namely, the Fréchet distance. While the Fréchet distance allows for restriction of the center complexity, it can also be sensitive to outliers, whereas averaging-type distance measures, like DTW, are less so. To obtain a trajectory clustering algorithm that allows restricting center complexity and is more robust to outliers, we propose the usage of a continuous version of DTW as distance measure, which we call continuous dynamic time warping (CDTW). Our contribution is twofold: (1) To combat the lack of practical algorithms for CDTW, we develop an approximation algorithm that computes it. (2) We develop the first clustering algorithm under this distance measure and show a practical way to compute a center from a set of trajectories and subsequently iteratively improve it. To obtain insights into the results of clustering under CDTW on practical data, we conduct extensive experiments.
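For readers unfamiliar with the discrete baseline the abstract contrasts against, the sketch below computes classic dynamic time warping (DTW) between two 2-D trajectories with the standard dynamic program. CDTW, as described above, instead integrates distances continuously along the warping path; the authors' CDTW approximation algorithm and clustering procedure are not reproduced here.

```python
# Hedged sketch: discrete DTW between two point sequences (baseline only, not CDTW).
import numpy as np

def dtw(p: np.ndarray, q: np.ndarray) -> float:
    """p, q: arrays of shape (n, 2) and (m, 2); returns the DTW cost."""
    n, m = len(p), len(q)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(p[i - 1] - q[j - 1])          # pairwise point distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

t1 = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
t2 = np.array([[0, 0.1], [1.5, 0.1], [3, 0.1]], dtype=float)
print("DTW(t1, t2) =", round(dtw(t1, t2), 3))
```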
{"title":"(k, l)-Medians Clustering of Trajectories Using Continuous Dynamic Time Warping","authors":"Milutin Brankovic, K. Buchin, Koen Klaren, A. Nusser, Aleksandr Popov, Sampson Wong","doi":"10.1145/3397536.3422245","DOIUrl":"https://doi.org/10.1145/3397536.3422245","url":null,"abstract":"Due to the massively increasing amount of available geospatial data and the need to present it in an understandable way, clustering this data is more important than ever. As clusters might contain a large number of objects, having a representative for each cluster significantly facilitates understanding a clustering. Clustering methods relying on such representatives are called center-based. In this work we consider the problem of center-based clustering of trajectories. In this setting, the representative of a cluster is again a trajectory. To obtain a compact representation of the clusters and to avoid overfitting, we restrict the complexity of the representative trajectories by a parameter l. This restriction, however, makes discrete distance measures like dynamic time warping (DTW) less suited. There is recent work on center-based clustering of trajectories with a continuous distance measure, namely, the Fréchet distance. While the Fréchet distance allows for restriction of the center complexity, it can also be sensitive to outliers, whereas averaging-type distance measures, like DTW, are less so. To obtain a trajectory clustering algorithm that allows restricting center complexity and is more robust to outliers, we propose the usage of a continuous version of DTW as distance measure, which we call continuous dynamic time warping (CDTW). Our contribution is twofold: (1) To combat the lack of practical algorithms for CDTW, we develop an approximation algorithm that computes it. (2) We develop the first clustering algorithm under this distance measure and show a practical way to compute a center from a set of trajectories and subsequently iteratively improve it. To obtain insights into the results of clustering under CDTW on practical data, we conduct extensive experiments.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121844773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Turbo-GTS: Scaling Mobile Crowdsourcing using Workload-Balancing Bisection Tree
W. Li, Haiquan Chen, Wei-Shinn Ku, X. Qin
In mobile crowdsourcing, workers are financially motivated to perform self-selected tasks to maximize their revenue. Unfortunately, the existing task scheduling approaches in mobile crowdsourcing fail to scale for massive tasks and large geographic areas. We present Turbo-GTS, a system that assigns tasks to each worker to maximize the total number of the tasks that can be completed for an entire worker group while taking into account various spatial and temporal constraints, such as task execution duration, task expiration time, and worker/task geographic locations. The core of Turbo-GTS is WBT-NNH and WBT-NUD, our two newly developed scheduling algorithms, which build on the algorithms, QT-NNH and QT-NUD, proposed in our prior work [5]. The key idea is that Turbo-GTS performs dynamic workload balancing among all workers using the proposed Workload-balancing Bisection Tree (WBT) in support of large-scale Geo-Task Scheduling (GTS). Turbo-GTS includes an interactive interface for users to load the current task/worker distributions and compare the task assignment of each worker returned by different algorithms in a real-time fashion. Using the Foursquare mobile user check-in data in New York City and Tokyo, we show the superiority of Turbo-GTS over the state of the art in terms of the total number of the tasks that can be accomplished by the entire worker group and the corresponding running time. We also demonstrate the front-end interface of Turbo-GTS with two exploratory use cases in New York City.
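To make the workload-balancing idea concrete, the toy sketch below recursively bisects a task set along its wider spatial axis into workload-balanced halves until one partition per worker remains. This is an illustrative stand-in under stated assumptions, not the WBT-NNH/WBT-NUD algorithms from the paper, and it ignores task durations, expirations, and worker locations.

```python
# Hedged sketch: spatial bisection of tasks into roughly equal per-worker partitions.
import numpy as np

def bisect_tasks(tasks: np.ndarray, n_workers: int):
    """tasks: (n, 2) array of task coordinates. Returns n_workers index arrays."""
    def split(idx, k):
        if k == 1:
            return [idx]
        pts = tasks[idx]
        axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))   # bisect the wider axis
        order = idx[np.argsort(pts[:, axis])]
        left_k = k // 2
        cut = int(round(len(order) * left_k / k))                   # workload-balanced cut
        return split(order[:cut], left_k) + split(order[cut:], k - left_k)
    return split(np.arange(len(tasks)), n_workers)

rng = np.random.default_rng(1)
tasks = rng.uniform(0, 10, size=(1000, 2))
parts = bisect_tasks(tasks, 8)
print([len(p) for p in parts])   # roughly equal task counts per worker
```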
{"title":"Turbo-GTS: Scaling Mobile Crowdsourcing using Workload-Balancing Bisection Tree","authors":"W. Li, Haiquan Chen, Wei-Shinn Ku, X. Qin","doi":"10.1145/3397536.3422335","DOIUrl":"https://doi.org/10.1145/3397536.3422335","url":null,"abstract":"In mobile crowdsourcing, workers are financially motivated to perform self-selected tasks to maximize their revenue. Unfortunately, the existing task scheduling approaches in mobile crowdsourcing fail to scale for massive tasks and large geographic areas. We present Turbo-GTS, a system that assigns tasks to each worker to maximize the total number of the tasks that can be completed for an entire worker group while taking into account various spatial and temporal constraints, such as task execution duration, task expiration time, and worker/task geographic locations. The core of Turbo-GTS is WBT-NNH and WBT-NUD, our two newly developed scheduling algorithms, which build on the algorithms, QT-NNH and QT-NUD, proposed in our prior work [5]. The key idea is that Turbo-GTS performs dynamic workload balancing among all workers using the proposed Workload-balancing Bisection Tree (WBT) in support of large-scale Geo-Task Scheduling (GTS). Turbo-GTS includes an interactive interface for users to load the current task/worker distributions and compare the task assignment of each worker returned by different algorithms in a real-time fashion. Using the Foursquare mobile user check-in data in New York City and Tokyo, we show the superiority of Turbo-GTS over the state of the art in terms of the total number of the tasks that can be accomplished by the entire worker group and the corresponding running time. We also demonstrate the front-end interface of Turbo-GTS with two exploratory use cases in New York City.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121276590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Graph Convolutional Networks with Kalman Filtering for Traffic Prediction
Fanglan Chen, Zhiqian Chen, Subhodip Biswas, Shuo Lei, Naren Ramakrishnan, Chang-Tien Lu
Traffic prediction is a challenging task due to the time-varying nature of traffic patterns and the complex spatial dependency of road networks. Adding to the challenge, traffic sensor reporting introduces a number of errors, including bias and noise. However, most previous works treat the sensor observations as exact measures, ignoring the effect of unknown noise. To model the spatial and temporal dependencies, existing studies combine graph neural networks (GNNs) with other deep learning techniques, but their equal weighting of different dependencies limits the models' ability to capture the real dynamics in the traffic network. To deal with the above issues, we propose a novel deep learning framework called Deep Kalman Filtering Network (DKFN) to forecast the network-wide traffic state by modeling the self and neighbor dependencies as two streams, whose predictions are fused under statistical theory and optimized through the Kalman filtering network. First, the reliability of each stream is evaluated using variances. Then, the Kalman filter is leveraged to properly fuse noisy observations in terms of their reliability. Experimental results reflect the superiority of the proposed method over baseline models on two real-world traffic datasets in the speed prediction task.
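The variance-based fusion step suggested by this abstract can be sketched in a few lines: two prediction streams for the same quantity are combined with inverse-variance (Kalman-style) weights, so the less reliable stream contributes less. The GCN/recurrent models that would produce those streams are omitted; the inputs below are synthetic and the function name is illustrative.

```python
# Hedged sketch: Kalman-style fusion of two prediction streams by their variances.
import numpy as np

def kalman_fuse(x_self, var_self, x_nbr, var_nbr):
    """Fuse two noisy estimates of the same state; returns fused mean and variance."""
    gain = var_self / (var_self + var_nbr)      # gain toward the neighbor-dependency stream
    fused = x_self + gain * (x_nbr - x_self)
    fused_var = (1.0 - gain) * var_self
    return fused, fused_var

true_speed = 55.0
rng = np.random.default_rng(0)
x_self = true_speed + rng.normal(0, 2.0)        # self-dependency stream (variance 4)
x_nbr = true_speed + rng.normal(0, 4.0)         # neighbor-dependency stream (variance 16)
fused, var = kalman_fuse(x_self, 4.0, x_nbr, 16.0)
print(f"self={x_self:.2f}  neighbor={x_nbr:.2f}  fused={fused:.2f} (var {var:.2f})")
```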
{"title":"Graph Convolutional Networks with Kalman Filtering for Traffic Prediction","authors":"Fanglan Chen, Zhiqian Chen, Subhodip Biswas, Shuo Lei, Naren Ramakrishnan, Chang-Tien Lu","doi":"10.1145/3397536.3422257","DOIUrl":"https://doi.org/10.1145/3397536.3422257","url":null,"abstract":"Traffic prediction is a challenging task due to the time-varying nature of traffic patterns and the complex spatial dependency of road networks. Adding to the challenge, there are a number of errors introduced in traffic sensor reporting, including bias and noise. However, most of the previous works treat the sensor observations as exact measures ignoring the effect of unknown noise. To model the spatial and temporal dependencies, existing studies combine graph neural networks (GNNs) with other deep learning techniques but their equal weighting of different dependencies limits the models' ability to capture the real dynamics in the traffic network. To deal with the above issues, we propose a novel deep learning framework called Deep Kalman Filtering Network (DKFN) to forecast the network-wide traffic state by modeling the self and neighbor dependencies as two streams, and their predictions are fused under the statistical theory and optimized through the Kalman filtering network. First, the reliability of each stream is evaluated using variances. Then, the Kalman filter is leveraged to properly fuse noisy observations in terms of their reliability. Experimental results reflect the superiority of the proposed method over baseline models on two real-world traffic datasets in the speed prediction task.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"35 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126771059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 24
Location Accuracy Estimates for Signal Fingerprinting
John Krumm
Location fingerprinting is a technique for determining the location of a device by measuring ambient signals such as radio signal strength, temperature, or any signal that varies with location. The accuracy of the technique is compromised by signal noise, quantization, and limited calibration resources. We develop generic, probabilistic models of location fingerprinting to find accuracy estimates. In one case, we look at predeployment modeling to predict accuracy before any signals have been measured using a new concept of noisy reverse geocoding. In another case, we model a previously deployed system to predict its accuracy. The models allow us to explore the accuracy implications of signal noise, calibration effort, and quantization of signals and space.
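A generic probabilistic sketch in the spirit of this abstract (not its specific pre- or post-deployment models): a calibration grid stores an expected signal vector per cell, an observation is scored by Gaussian likelihood under an assumed noise level, and the resulting posterior over cells yields both a location estimate and a sense of its accuracy. The grid, noise level, and signal dimensionality are all synthetic assumptions.

```python
# Hedged sketch: Gaussian-likelihood matching of an observation against a fingerprint grid.
import numpy as np

rng = np.random.default_rng(3)
grid = rng.uniform(-90, -30, size=(20, 20, 4))   # fingerprint: 4 signal strengths per cell (dB)
sigma = 4.0                                      # assumed measurement noise (dB)

true_cell = (12, 7)
observation = grid[true_cell] + rng.normal(0, sigma, 4)

# Log-likelihood of the observation in every cell, then a normalized posterior over cells.
log_lik = -np.sum((grid - observation) ** 2, axis=2) / (2 * sigma ** 2)
post = np.exp(log_lik - log_lik.max())
post /= post.sum()

estimate = np.unravel_index(np.argmax(post), post.shape)
print("true cell:", true_cell, "estimated cell:", estimate)
print("posterior mass at estimate: %.2f" % post[estimate])
```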
{"title":"Location Accuracy Estimates for Signal Fingerprinting","authors":"John Krumm","doi":"10.1145/3397536.3422243","DOIUrl":"https://doi.org/10.1145/3397536.3422243","url":null,"abstract":"Location fingerprinting is a technique for determining the location of a device by measuring ambient signals such as radio signal strength, temperature, or any signal that varies with location. The accuracy of the technique is compromised by signal noise, quantization, and limited calibration resources. We develop generic, probabilistic models of location fingerprinting to find accuracy estimates. In one case, we look at predeployment modeling to predict accuracy before any signals have been measured using a new concept of noisy reverse geocoding. In another case, we model a previously deployed system to predict its accuracy. The models allow us to explore the accuracy implications of signal noise, calibration effort, and quantization of signals and space.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131003249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
PinSout
Taehoon Kim, Wijae Cho, Akiyoshi Matono, Kyoung-Sook Kim
With the development of Light Detection and Ranging (LiDAR) technology, point cloud data has become a valuable resource for building three-dimensional (3D) models of digital twins. The geospatial 3D model is the principal element for abstracting a geographic feature with geometric and semantic properties. Compared to point clouds, 3D model data is more efficient to handle, retrieve, exchange, and visualize. However, constructing 3D models, especially of indoor spaces where various objects exist, usually requires substantial time and manual labor to organize and extract the geometry information with authoring tools. This demonstration introduces Point-in Space-out (PinSout), a new framework that automatically generates 3D space models from raw 3D point cloud data by leveraging three open-source software packages: PointNet, Point Cloud Library (PCL), and 3D City Database (3DCityDB). The framework performs semantic segmentation with PointNet, a deep learning algorithm for point clouds, to assign a target label, such as wall, floor, or ceiling, to each point in a point cloud. It then divides the point cloud into one cluster per label and computes surface elements with PCL. Each surface is stored in a 3DCityDB database to export OGC CityGML data. Finally, we evaluate the accuracy with two datasets: a synthetic point cloud generated from a 3D model and a real dataset taken from exhibition halls.
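As a small stand-in for the surface-extraction step only (PointNet, PCL, and 3DCityDB are not used here), the sketch below fits a plane to points already labeled as one class, e.g. "floor", with a plain RANSAC loop. The tolerance and iteration count are illustrative assumptions.

```python
# Hedged sketch: RANSAC plane fitting on a labeled subset of an indoor point cloud.
import numpy as np

def ransac_plane(points: np.ndarray, n_iter=200, tol=0.05, rng=None):
    """points: (n, 3). Returns (normal, d, inlier_mask) for the best plane normal.x + d = 0."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                               # skip degenerate (collinear) samples
        normal /= norm
        d = -normal.dot(p0)
        inliers = np.abs(points @ normal + d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane[0], best_plane[1], best_inliers

rng = np.random.default_rng(5)
floor = np.column_stack([rng.uniform(0, 5, 500), rng.uniform(0, 5, 500),
                         rng.normal(0.0, 0.02, 500)])        # noisy points near z = 0
normal, d, inliers = ransac_plane(floor)
print("plane normal:", np.round(normal, 2), "inliers:", int(inliers.sum()))
```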
{"title":"PinSout","authors":"Taehoon Kim, Wijae Cho, Akiyoshi Matono, Kyoung-Sook Kim","doi":"10.1145/3397536.3422343","DOIUrl":"https://doi.org/10.1145/3397536.3422343","url":null,"abstract":"With the development of Light Detection and Ranging (LiDAR) technology, point cloud data is a valuable resource to build three-dimensional (3D) models of digital twins. The geospatial 3D model is the principal element to abstract a geographic feature with geometric and semantic properties. The 3D model data provides more efficiency to handle, retrieve, exchange, and visualize geographic features compared to point clouds. However, the construction of 3D models, especially indoor space where various objects exist, usually necessitates expensive time and manual labor resources to organize and extract the geometry information by authoring tools. This demonstration introduces Point-in Space-out (PinSout), a new framework to automatically generate 3D space models from raw 3D point cloud data by leveraging three open-source software: PointNet, Point Cloud Library (PCL), and 3D City Database (3DCityDB). The framework performs the semantic segmentation by PointNet, a deep learning algorithm for the point cloud, to assign a target label to each point from a point cloud, such as walls, floors, and ceilings. It then divides the point cloud into each label cluster and computes surface elements by PCL. Each surface is stored into a 3DCityDB database to export an OGC CityGML data. Finally, we evaluate the accuracy with two datasets: a synthetic point-cloud set of a 3D model and a real dataset taken from the exhibition halls.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121349992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Ambulance Dispatch via Deep Reinforcement Learning
Kunpeng Liu, Xiaolin Li, C. Zou, Haibo Huang, Yanjie Fu
In this paper, we solve the ambulance dispatch problem with a reinforcement learning oriented strategy. The ambulance dispatch problem is defined as deciding which ambulance picks up which patient. Traditional studies on ambulance dispatch mainly focus on predefined protocols and are verified on simple simulation data, which are not flexible enough for dynamically changing real-world cases. We propose an efficient ambulance dispatch method based on the reinforcement learning framework, namely Multi-Agent Q-Network with Experience Replay (MAQR). Specifically, we first reformulate the ambulance dispatch problem within a multi-agent reinforcement learning framework and design the corresponding state, action, and reward functions. We then build a simulator that controls ambulance status, generates patient requests, and interacts with ambulances. Finally, we conduct extensive experiments to demonstrate the superiority of the proposed method.
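To make the MAQR ingredients concrete, the toy sketch below runs tabular, single-agent Q-learning with an experience replay buffer; the paper's multi-agent deep Q-network and dispatch simulator are not reproduced, and the states, actions, and rewards here are synthetic placeholders.

```python
# Hedged sketch: tabular Q-learning with experience replay on a synthetic environment.
import random
from collections import deque
import numpy as np

n_states, n_actions = 10, 3           # e.g., coarse patient-location bins x ambulance choice
Q = np.zeros((n_states, n_actions))
replay = deque(maxlen=1000)
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

def step(state, action):
    """Hypothetical environment: reward is higher when the action 'matches' the state."""
    reward = 1.0 if action == state % n_actions else -0.1
    return int(rng.integers(n_states)), reward

state = int(rng.integers(n_states))
for t in range(5000):
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    replay.append((state, action, reward, next_state))
    # Experience replay: update from a random mini-batch of past transitions.
    for s, a, r, s2 in random.sample(list(replay), k=min(32, len(replay))):
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    state = next_state

print("greedy action per state:", np.argmax(Q, axis=1))
```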
{"title":"Ambulance Dispatch via Deep Reinforcement Learning","authors":"Kunpeng Liu, Xiaolin Li, C. Zou, Haibo Huang, Yanjie Fu","doi":"10.1145/3397536.3422204","DOIUrl":"https://doi.org/10.1145/3397536.3422204","url":null,"abstract":"In this paper, we solve the ambulance dispatch problem with a reinforcement learning oriented strategy. The ambulance dispatch problem is defined as deciding which ambulance to pick up which patient. Traditional studies on ambulance dispatch mainly focus on predefined protocols and are verified on simple simulation data, which are not flexible enough when facing the dynamically changing real-world cases. In this paper, we propose an efficient ambulance dispatch method based on the reinforcement learning framework, i.e., Multi-Agent Q-Network with Experience Replay(MAQR). Specifically, we firstly reformulate the ambulance dispatch problem with a multi-agent reinforcement learning framework, and then design the state, action, and reward function correspondingly for the framework. Thirdly, we design a simulator that controls ambulance status, generates patient requests and interacts with ambulances. Finally, we design extensive experiments to demonstrate the superiority of the proposed method.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128570308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
Distributed Spatiotemporal Trajectory Query Processing in SQL
Mohamed S. Bakli, M. Sakr, E. Zimányi
Nowadays, the collection of moving object data is increasing significantly due to the ubiquity of GPS-enabled devices. Managing and analyzing this kind of data is crucial in many application domains, including social mobility, pandemics, and transportation. In previous work, we proposed the MobilityDB moving object database system. It is a production-ready system built on top of PostgreSQL and PostGIS; it accepts SQL queries and offers most of the common spatiotemporal types and operations. In this paper, to address the scalability requirements of big data, we provide an architecture and an implementation of a distributed moving object database system based on MobilityDB. More specifically, we define: (1) an architecture for deploying a distributed MobilityDB database on a cluster using readily available tools, (2) two alternative trajectory data partitioning and index partitioning methods, and (3) a query optimizer capable of distributing spatiotemporal SQL queries over multiple MobilityDB instances. The overall outcome is that the cluster is managed in SQL at run time and that user queries are transparently distributed and executed. This is validated with experiments on a real dataset, which also compare MobilityDB with other relevant systems.
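The partitioning idea mentioned in point (2) can be illustrated outside the database: the sketch below, which does not use MobilityDB or PostgreSQL, routes each trajectory to the spatial grid tile containing most of its points, so that queries over a region can be pruned to a few worker nodes. The grid extent and tile count are illustrative assumptions.

```python
# Hedged sketch: spatial-grid partitioning of trajectories across worker nodes.
import numpy as np
from collections import Counter, defaultdict

def partition_trajectories(trajectories, n_tiles=4, extent=(0.0, 100.0)):
    """trajectories: list of (n_i, 2) arrays. Returns {tile_id: [trajectory indices]}."""
    lo, hi = extent
    edges = np.linspace(lo, hi, n_tiles + 1)
    partitions = defaultdict(list)
    for idx, traj in enumerate(trajectories):
        cols = np.clip(np.searchsorted(edges, traj[:, 0], side="right") - 1, 0, n_tiles - 1)
        rows = np.clip(np.searchsorted(edges, traj[:, 1], side="right") - 1, 0, n_tiles - 1)
        # Assign the whole trajectory to the tile that contains most of its points.
        tile = Counter(zip(cols.tolist(), rows.tolist())).most_common(1)[0][0]
        partitions[tile].append(idx)
    return partitions

rng = np.random.default_rng(7)
trajs = [rng.uniform(0, 100, size=(50, 2)) for _ in range(200)]
parts = partition_trajectories(trajs)
print({tile: len(ids) for tile, ids in parts.items()})
```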
{"title":"Distributed Spatiotemporal Trajectory Query Processing in SQL","authors":"Mohamed S. Bakli, M. Sakr, E. Zimányi","doi":"10.1145/3397536.3422262","DOIUrl":"https://doi.org/10.1145/3397536.3422262","url":null,"abstract":"Nowadays, the collection of moving object data is significantly increasing due to the ubiquity of GPS-enabled devices. Managing and analyzing this kind of data is crucial in many application domains, including social mobility, pandemics, and transportation. In previous work, we have proposed the MobilityDB moving object database system. It is a production-ready system, that is built on top of PostgreSQL and PostGIS. It accepts SQL queries and offers most of the common spatiotemporal types and operations. In this paper, to address the scalability requirement of big data, we provide an architecture and an implementation of a distributed moving object database system based on MobilityDB. More specifically, we define: (1) an architecture for deploying a distributed MobilityDB database on a cluster using readily available tools, (2) two alternative trajectory data partitioning and index partitioning methods, and (3) a query optimizer that is capable of distributing spatiotemporal SQL queries over multiple MobilityDB instances. The overall outcome is that the cluster is managed in SQL at the run-time and that the user queries are transparently distributed and executed. This is validated with experiments using a real dataset, which also compares MobilityDB with other relevant systems.","PeriodicalId":233918,"journal":{"name":"Proceedings of the 28th International Conference on Advances in Geographic Information Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114264865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4