
Proceedings of the 24th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems: Latest Publications

A demonstration of SHAREK: an efficient matching framework for ride sharing systems
Louai Alarabi, Bin Cao, Liwei Zhao, M. Mokbel, Anas Basalamah
Recently, many ride sharing systems have been commercially introduced (e.g., Uber, Flinc, and Lyft), forming a multi-billion dollar industry. The main idea is to match people requesting a certain ride with other people who act as drivers in their own spare time. The matching algorithm run by these services is very simple and ignores a wide sector of users who could be served to maximize the benefits of these services. In this demo, we demonstrate SHAREK, a driver-rider matching algorithm that can be embedded inside existing ride sharing services to enhance the quality of their matching. SHAREK has the potential to boost the performance and widen the user base and applicability of existing ride sharing services. This is mainly because, within its matching technique, SHAREK takes into account user preferences in terms of the maximum waiting time the rider is willing to tolerate before being picked up as well as the maximum cost that the rider is willing to pay. Then, within its course of execution, SHAREK applies a set of smart filters that enable it to perform the matching efficiently without the need for many expensive shortest-path computations.
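The abstract does not spell out the filters themselves. The minimal sketch below (with hypothetical `rider`/`driver` objects and parameter names) only illustrates the general pruning idea of rejecting candidate drivers with a cheap straight-line lower bound before paying for any shortest-path computation; it is not SHAREK's actual algorithm.

```python
import math

def euclidean_km(a, b):
    # Cheap straight-line lower bound on road distance between (lat, lon) points;
    # a real system would use haversine or a road-network lower bound.
    return math.dist(a, b) * 111.0  # rough degrees-to-km conversion

def candidate_drivers(rider, drivers, speed_kmh=30.0):
    """Prune drivers that cannot possibly satisfy the rider's constraints
    before running any shortest-path computation (illustrative sketch only;
    rider/driver fields are assumptions, not SHAREK's data model)."""
    survivors = []
    for d in drivers:
        # Lower bound on pickup time: even driving in a straight line takes this long.
        pickup_lb_min = euclidean_km(d.location, rider.pickup) / speed_kmh * 60
        if pickup_lb_min > rider.max_wait_min:
            continue  # cannot meet the waiting-time preference, skip exact check
        # Lower bound on trip cost from straight-line distances.
        trip_lb_km = euclidean_km(d.location, rider.pickup) + euclidean_km(rider.pickup, rider.dropoff)
        if trip_lb_km * rider.cost_per_km > rider.max_cost:
            continue  # cannot meet the cost preference, skip exact check
        survivors.append(d)  # only these need exact shortest-path computations
    return survivors
```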
Citations: 10
A framework for updating multi-criteria optimal location query (demo paper)
P. Harn, Ji Zhang, Min-Te Sun, Wei-Shinn Ku
A variety of optimal location problems have been extensively studied in the literature. However, few visualization systems have been developed for illustrating the optimal location selection process. In this demonstration, we present a system that visualizes an advanced solution that can efficiently answer multi-criteria optimal location updating queries by incrementally updating the Minimum Overlapping Voronoi Diagram (MOVD) model. Not only does our system display a practical example of a multi-criteria optimal location updating query, it also visualizes the query evaluation process in a more intuitive manner. With the object insertion and deletion operations defined over the MOVD model, any object change in an MOVD can be represented by removing the object from the initial dataset and adding it back with updated attributes. Moreover, the Haxe toolkit is used to provide friendly and flexible user interfaces in our system.
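The update semantics described here (any object change is expressed as a deletion followed by a re-insertion with updated attributes) can be sketched generically as below; `movd.delete` and `movd.insert` are hypothetical placeholders for the incremental MOVD operations, which the abstract does not detail.

```python
def update_object(movd, obj_id, new_attributes):
    """Apply an attribute change as a delete followed by a re-insert,
    mirroring the update model described in the abstract. The `movd`
    object and its delete/insert methods are hypothetical placeholders."""
    obj = movd.delete(obj_id)               # remove the object from the diagram
    obj.attributes.update(new_attributes)   # apply the updated attributes
    movd.insert(obj)                        # add it back, triggering an incremental update
    return movd
```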
Citations: 1
SAGEL: smart address geocoding engine for supply-chain logistics
Abhranil Chatterjee, Janit Anjaria, Sourav Roy, A. Ganguli, K. Seal
With the recent explosion of the e-commerce industry in India, the problem of address geocoding, that is, transforming textual address descriptions into geographic references such as latitude-longitude coordinates, has emerged as a core problem for supply chain management. Some of the major areas that rely on precise and accurate address geocoding are supply chain fulfilment, supply chain analytics, and logistics. In this paper, we present some of the challenges faced in practice while building an address geocoding engine as a core capability at Flipkart. We discuss the unique challenges of building a geocoding engine for a rapidly developing country like India, such as fuzzy region boundaries, dynamic topography, and the lack of conventions in the spelling of toponyms, to name a few. We motivate the need for building a reliable and precise address geocoding system from a business perspective and argue why some of the commercially available solutions do not suffice for our requirements. SAGEL has evolved through three cycles of solution prototypes and pilot experiments. We describe the lessons learned from each of these phases and how we incorporated them to reach the first production-ready version. We describe how we store and index map data on a SolrCloud cluster of Apache Solr, an open-source search platform, and the core geocoding algorithm, which works post-retrieval to determine the best matches among a set of candidate results. We give a brief description of the system architecture and report the accuracy of our geocoding engine by measuring, for a sizeable address set, the deviations of geocoded customer addresses across India from the verified latitude-longitude coordinates of those addresses. We also measure and report our system's ability to geocode at different region levels, such as city, locality, or building. We compare our results with those of the geocoding service provided by Google against a set of addresses for which we have verified latitude-longitude coordinates, and show that our geocoding engine is almost as accurate as Google's while having higher coverage.
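As a rough illustration of the post-retrieval step, the sketch below re-ranks geocoding candidates returned by the search index using token overlap plus a bonus for matching finer address levels. The field names and scoring rule are illustrative assumptions, not SAGEL's actual logic.

```python
def rerank(query_tokens, candidates):
    """Pick the best geocoding candidate among those returned by the index.
    Illustrative scoring only: token overlap with the query plus a bonus for
    matching the finer levels of the address hierarchy (building > locality > city)."""
    level_bonus = {"building": 3.0, "locality": 2.0, "city": 1.0}
    query = set(t.lower() for t in query_tokens)
    best, best_score = None, float("-inf")
    for cand in candidates:
        overlap = len(query & set(t.lower() for t in cand["tokens"]))
        score = overlap + level_bonus.get(cand["level"], 0.0)
        if score > best_score:
            best, best_score = cand, score
    return best  # the chosen candidate carries the latitude/longitude to return
```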
Citations: 12
FCCF: forecasting citywide crowd flows based on big data
Minh X. Hoang, Yu Zheng, Ambuj K. Singh
Predicting the movement of crowds in a city is strategically important for traffic management, risk assessment, and public safety. In this paper, we propose predicting two types of crowd flows in every region of a city based on big data, including human mobility data, weather conditions, and road network data. To develop a practical solution for citywide traffic prediction, we first partition the map of a city into regions using both its road network and historical records of human mobility. Our problem is different from predicting each individual's movements or each road segment's traffic conditions, which are computationally costly and not necessary from the perspective of public safety on a citywide scale. To model the multiple complex factors affecting crowd flows, we decompose flows into three components: seasonal (periodic patterns), trend (changes in periodic patterns), and residual flows (instantaneous changes). The seasonal and trend models are built as intrinsic Gaussian Markov random fields, which can cope with noisy and missing data, whereas the residual model exploits the spatio-temporal dependence among different flows and regions as well as the effect of weather. Experimental results on three real-world datasets show that our method is scalable and significantly outperforms all baselines in terms of accuracy.
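A toy version of the three-way split can be written with periodic means, as below; FCCF itself fits the seasonal and trend components as intrinsic Gaussian Markov random fields, so this sketch only shows how a flow series decomposes into the three parts (assuming at least one full weekly cycle of hourly data).

```python
import numpy as np

def decompose(flow, period=24 * 7):
    """Split a region's crowd-flow series into seasonal, trend and residual parts.
    Toy decomposition using periodic means over a weekly cycle; not FCCF's model."""
    flow = np.asarray(flow, dtype=float)
    n = len(flow)
    # Seasonal: average value at each position of the weekly cycle.
    cycle = np.array([flow[i::period].mean() for i in range(period)])
    seasonal = np.tile(cycle, n // period + 1)[:n]
    # Trend: slow drift of the deseasonalised series (moving average over one period).
    deseason = flow - seasonal
    trend = np.convolve(deseason, np.ones(period) / period, mode="same")
    # Residual: instantaneous changes left over after seasonal and trend parts.
    residual = flow - seasonal - trend
    return seasonal, trend, residual
```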
Citations: 142
A framework for evacuation hotspot detection after large scale disasters using location data from smartphones: case study of Kumamoto earthquake
T. Yabe, K. Tsubouchi, Akihito Sudo, Y. Sekimoto
Large scale disasters cause severe social disorder and trigger mass evacuation activities. Managing evacuation shelters efficiently is crucial for disaster management. Kumamoto prefecture, Japan, was hit by an enormous (magnitude 7.3) earthquake on 16 April 2016. As a result, more than 10,000 buildings were severely damaged and over 100,000 people had to evacuate from their homes. After the earthquake, it took the decision makers several days to grasp the locations where people were evacuating, which delayed the distribution of supplies and rescue efforts. The situation was made even more complex because some people evacuated to places that were not designated as evacuation shelters. Conventional methods for grasping evacuation hotspots require on-foot field surveys, which take time and are difficult to execute in the confusion right after the hazard. We propose a novel framework to efficiently estimate evacuation hotspots after large disasters using location data collected from smartphones. To validate our framework and show useful analyses based on its output, we applied the framework to the Kumamoto earthquake using smartphone GPS data collected by Yahoo Japan. We verified that our estimation accuracy for evacuation hotspots was very high by checking the facilities at the estimated locations and by comparing the population transition results with newspaper reports. Additionally, we demonstrated analyses based on our framework's outputs that would help decision makers, such as the population transition and the functioning period of each hotspot. The efficiency of our framework is also validated by measuring its processing time, showing that it can be utilized effectively in disasters of any scale. Our framework provides useful output for decision makers who manage evacuation shelters after various kinds of large scale disasters.
Citations: 28
Bicycle-sharing systems expansion: station re-deployment through crowd planning
Jiawei Zhang, Xiao Pan, Moyin Li, Philip S. Yu
Bicycle-sharing systems (BSSs), which provide short-term shared bike usage services to the public, are becoming very popular in many large cities. The accelerating bike travel demands from the public have driven significant expansions of many BSSs to place additional bikes and stations in their extended service regions. Meanwhile, to capture individuals' traveling needs more precisely, many BSSs have set up websites during the expansion to receive station location suggestions from the public. In this paper, we study the bike station re-deployment problem in BSS expansion. Besides historical bike usage and construction cost information, crowd suggestions are also incorporated into the problem. The station re-deployment problem is very challenging to solve, and it covers two sub-tasks simultaneously: (1) bike station location identification, and (2) bike dock assignment (to the deployed stations). To address the problem, a novel bike station re-deployment framework, CrowdPlanning, is introduced in this paper. In both the station deployment and capacity assignment tasks, CrowdPlanning simultaneously fuses different categories of spatial information, including crowd suggestions, individuals' historical bike usage, and construction costs. By formulating these two tasks as two optimization problems, the optimal expansion strategies for the BSSs can be identified by CrowdPlanning. Extensive experiments are conducted on real-world BSS and crowd suggestion datasets to demonstrate the effectiveness of the CrowdPlanning framework.
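The abstract does not give the optimization formulations; the toy greedy sketch below only conveys the flavour of the first sub-task, picking station sites by suggestion-plus-demand value per unit construction cost under a budget. The field names and the greedy rule are assumptions, not CrowdPlanning's method.

```python
def greedy_station_selection(candidates, budget):
    """candidates: list of dicts with 'site', 'demand' (historical usage),
    'suggestions' (crowd votes), and 'cost'. Greedy value-per-cost selection
    under a construction budget; CrowdPlanning itself solves a formal
    optimization problem rather than this heuristic."""
    chosen, spent = [], 0.0
    ranked = sorted(candidates,
                    key=lambda c: (c["demand"] + c["suggestions"]) / c["cost"],
                    reverse=True)
    for c in ranked:
        if spent + c["cost"] <= budget:
            chosen.append(c["site"])
            spent += c["cost"]
    return chosen
```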
Citations: 20
Location corroborations by mobile devices without traces
Y. Kanza
A location corroboration of a person is a proof, in the form of a digital record, indicating that this person was at a particular place at a given time. That is, given a user u, a location l, and a time t, a location corroboration is certified evidence that u was at location l at time t. Such corroborations can be used in legal procedures, help solve personal disputes, or enable services that rely on knowing with certainty the location of a user at a given time. A corroboration without traces means that the user location is not stored in any public server or in any other public entity, to protect the user's privacy. In this paper we present the problem of producing a location corroboration without traces, using a mobile device, and we discuss possible solutions to it.
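One conceivable way to obtain such a digital record without leaving traces is for a corroborating party to sign the (user, location, time) tuple and hand it back to the user, keeping nothing on its own side. The sketch below, with a hypothetical witness key, illustrates only that general idea; it is not the solution discussed in the paper.

```python
import hashlib, hmac, json, time

WITNESS_KEY = b"witness-secret-key"  # hypothetical key held by the corroborating party

def issue_corroboration(user_id, lat, lon, ts=None):
    """The witness signs (user, location, time) and returns the record to the
    user; nothing is stored on the witness side, so no trace of the location remains."""
    record = {"user": user_id, "lat": lat, "lon": lon, "t": ts or int(time.time())}
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(WITNESS_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_corroboration(record):
    """Anyone holding the witness key can later check the record's authenticity."""
    payload = json.dumps({k: record[k] for k in ("user", "lat", "lon", "t")},
                         sort_keys=True).encode()
    expected = hmac.new(WITNESS_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["sig"])
```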
Citations: 10
A spatial column-store to triangulate the Netherlands on the fly
R. Goncalves, T. V. Tilburg, K. Kyzirakos, F. Alvanaki, P. Koutsourakis, B. V. Werkhoven, W. V. Hage
3D digital city models, important for urban planning, are currently constructed from massive point clouds obtained through airborne LiDAR (Light Detection and Ranging). They are semantically enriched with information obtained from auxiliary GIS data such as cadastral data, which contains information about property boundaries, road networks, rivers, lakes, etc. Technical advances in LiDAR data acquisition systems have made possible the rapid acquisition of high-resolution topographical information for an entire country. Such data sets are now reaching the trillion-point barrier. To cope with this data deluge and provide up-to-date 3D digital city models on demand, current geospatial data management strategies should be rethought. This work presents a column-oriented Spatial Database Management System which provides in-situ data access, effective data skipping, efficient spatial operations, and interactive data visualization. Its efficiency and scalability are demonstrated using a dense LiDAR scan of the Netherlands consisting of 640 billion points and the latest cadastral information, and compared with PostGIS.
Citations: 8
User identification in cyber-physical space: a case study on mobile query logs and trajectories
Tianyi Hao, Jingbo Zhou, Yunsheng Cheng, Longbo Huang, Haishan Wu
User identification across domains has drawn considerable research effort in recent years. Although most existing work focuses on user identification in a single space, in this paper we attempt to identify users by fusing their activities in cyber space and physical space, which helps us obtain a comprehensive understanding of users' online behaviour as well as their offline visitation. Our key insight for tackling this problem is that we can build a connection between the cyber space and the physical space through the stable location distribution of IP addresses. Thus, we propose a novel framework for user identification in cyber-physical space, which consists of three key steps: 1) modeling the location distribution of each IP address; 2) computing the co-occurrence with an inverted index to reduce the space and time cost; and 3) a learning-to-rank tactic that fuses the user features shared in both spaces to improve accuracy. We conduct experiments identifying individual users from mobile query logs (generated in cyber space) and trajectory data (generated in physical space) to demonstrate the efficiency and effectiveness of our framework.
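Step 2 can be pictured as follows: index physical-space trajectories by coarse location cells, then count how often a cyber-space user's IP-derived locations co-occur with each physical-space user. The cell size and data layout below are illustrative assumptions, not the paper's exact implementation.

```python
from collections import defaultdict

def build_inverted_index(trajectories):
    """Map a coarse location cell to the physical-space users seen there.
    `trajectories` is {physical_user: [(lat, lon), ...]}; the cell size is illustrative."""
    index = defaultdict(set)
    for user, points in trajectories.items():
        for lat, lon in points:
            cell = (round(lat, 3), round(lon, 3))  # roughly 100 m grid cell
            index[cell].add(user)
    return index

def co_occurrence(cyber_user_cells, index):
    """Count how often each physical-space user shares a cell with the
    cyber-space user's IP-derived locations; high counts are candidates
    for identity linking (before any learning-to-rank step)."""
    counts = defaultdict(int)
    for cell in cyber_user_cells:
        for phys_user in index.get(cell, ()):
            counts[phys_user] += 1
    return sorted(counts.items(), key=lambda kv: -kv[1])
```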
Citations: 18
A concise summary of spatial anomalies and its application in efficient real-time driving behaviour monitoring
Hoang Thanh Lam
This work is motivated by a smart car application that analyses streams of data generated by cars to enhance transportation safety. We treat the problem as real-time abnormal driving behaviour detection using spatio-temporal data collected from mobile devices, including GPS location, speed, and steering angle. A concise summary is proposed to capture spatial patterns from GPS trajectory data for efficient real-time anomaly detection. An approach that solves this problem by nearest neighbour search has O(n) space and O(log(n) + k) query time complexity, where k is the neighbourhood size and n is the data size. The concise summary approach, on the other hand, requires only O(ε * n) memory and has O(log(ε * n)) query time complexity, where ε is several orders of magnitude smaller than one. Experiments with two large datasets from Porto and Beijing showed that our method used only a few megabytes to summarise datasets with n = 80 million data points and was able to process 30K queries per second, which is several orders of magnitude faster than the baseline approach. In addition, interesting spatio-temporal patterns regarding abnormal driving behaviours in the real-world datasets are discussed to demonstrate the potential application of the work in many industries, including insurance, transportation safety enhancement, and city transport management.
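The abstract does not describe the summary structure itself; one way to obtain O(ε * n)-sized state is to keep visit counts over a coarse spatial grid and flag points that fall in rarely visited cells. The sketch below illustrates that idea under those assumptions; it is not the paper's exact summary.

```python
from collections import Counter

class GridSummary:
    """Coarse histogram of historical GPS points. Memory grows with the number
    of occupied cells (roughly a small fraction of n), not with the raw number
    of points, and a query is a single hash probe."""

    def __init__(self, cell_deg=0.001, min_count=5):
        self.counts = Counter()
        self.cell_deg = cell_deg      # cell size in degrees (illustrative)
        self.min_count = min_count    # visits below this threshold are anomalous

    def _cell(self, lat, lon):
        return (int(lat / self.cell_deg), int(lon / self.cell_deg))

    def add(self, lat, lon):
        # Summarise a historical GPS point.
        self.counts[self._cell(lat, lon)] += 1

    def is_anomalous(self, lat, lon):
        # A point in a cell rarely (or never) visited historically is flagged.
        return self.counts[self._cell(lat, lon)] < self.min_count
```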
Citations: 5