
2013 Fourth International Conference on Computing for Geospatial Research and Application: Latest Publications

Complex GIS Query Based on Rule Engine
Liu Chen-fan, Liu Hai-yan
In previous geographic information query systems, query conditions are either fixed in the program or exposed to users through an SQL query interface. The former is unalterable, while the latter requires users to have some knowledge of the SQL query language. This article describes how a rule engine, supported by a rule base, can answer queries built from simple combinations of natural-semantic modules. First, users formulate a query plan by combining natural language modules according to their own needs. Then, the query plan is passed to the rule engine for reasoning and matching, which finds the matching rule and executes it. Finally, the execution results are returned to the user.
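The workflow described above (combine natural-semantic modules into a query plan, then let a rule engine match and execute a rule) can be illustrated with a minimal sketch. The rule base, module names, parameters, and generated SQL below are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of rule-engine style query matching, assuming a hypothetical
# rule base; module names, parameters, and the generated SQL are illustrative.

RULE_BASE = [
    # (condition: required semantic modules, action: rule body to execute)
    ({"find", "hospital", "within_distance"},
     lambda p: f"SELECT name FROM hospitals "
               f"WHERE ST_DWithin(geom, {p['origin']}, {p['radius']})"),
    ({"find", "road", "intersects"},
     lambda p: f"SELECT name FROM roads WHERE ST_Intersects(geom, {p['area']})"),
]

def match_and_execute(query_plan, params):
    """Reasoning and matching: the first rule whose condition set is covered
    by the user's query plan is selected and its action is executed."""
    plan = set(query_plan)
    for condition, action in RULE_BASE:
        if condition <= plan:          # all required modules are present
            return action(params)      # execute the matched rule
    raise LookupError("no matching rule for this query plan")

# The user combines natural-semantic modules instead of writing SQL directly.
plan = ["find", "hospital", "within_distance"]
print(match_and_execute(plan, {"origin": "'POINT(113.5 34.8)'", "radius": 5000}))
```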
{"title":"Complex GIS Query Based on Rule Engine","authors":"Liu Chen-fan, Liu Hai-yan","doi":"10.1109/COMGEO.2013.28","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.28","url":null,"abstract":"In previous geographic information inquiry, query condition is either fixed in program or providing an SQL inquiry mode for users. The former condition is unalterable while the latter demands users to be equipped with certain SQL query language knowledge. The article introduces how to use a rule engine to make inquiries through simple combination between natural semantic modules with the support of rule base. First, users formulate query plans through simple combination between natural language modules according to their own demands. Then, users deliver the query scheme to the rule engine for reasoning & matching, find the correct matching rule, and execute this rule. Finally, execution results are returned to users.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116935429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Superpixel Clustering and Planar Fit Segmentation of 3D LIDAR Point Clouds
H. Mahmoudabadi, Timothy Shoaf, M. Olsen
Terrestrial laser scanning (TLS, also called ground-based Light Detection and Ranging, LIDAR) is an effective data acquisition method capable of producing high-precision, detailed 3D models for surveying natural environments. However, despite the high density and quality of the data itself, the data acquired contains no direct intelligence necessary for further modeling and analysis - merely the 3D geometry (XYZ), 3-component color (RGB), and laser return signal strength (I) for each point. One common task in LIDAR data processing is the selection of an appropriate methodology for extracting geometric features from the irregularly distributed point clouds. Such recognition schemes must accomplish both segmentation and classification. Planar (or other geometrically primitive) feature extraction is a common method for point cloud segmentation; however, current algorithms are computationally expensive and often do not utilize color or intensity information. In this paper we present an efficient algorithm that takes both colorimetric and geometric data as input and consists of three principal steps to accomplish a more flexible form of feature extraction. First, we employ the Simple Linear Iterative Clustering (SLIC) superpixel algorithm for clustering and dividing the colorimetric data. Second, we use a plane-fitting technique on each significantly smaller cluster to produce a set of normal vectors corresponding to each superpixel. Last, we utilize a Least Squares Multi-class Support Vector Machine (LSMSVM) to classify each cluster as either "ground", "wall", or "natural feature". Despite the challenging problems presented by the occlusion of features during data acquisition, our method effectively generates accurate (>85%) segmentation results by utilizing the color space information, in addition to the standard geometry, during segmentation.
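A minimal sketch of the first two steps (SLIC superpixel clustering of the colorimetric data, then a per-cluster plane fit producing one normal vector per superpixel) is given below, assuming the scan has already been organized as an RGB image aligned with an XYZ array. The scikit-image `slic` call and the SVD plane fit are generic stand-ins for the authors' implementation, and the LS-SVM classification step is omitted.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_plane_normals(rgb, xyz, n_segments=400):
    """Cluster an RGB image into SLIC superpixels, then fit a plane to the
    XYZ points of each superpixel and return one unit normal per cluster.
    rgb: (H, W, 3) float image in [0, 1]; xyz: (H, W, 3) point coordinates."""
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    normals = {}
    for lab in np.unique(labels):
        pts = xyz[labels == lab].reshape(-1, 3)
        if pts.shape[0] < 3:
            continue
        centered = pts - pts.mean(axis=0)
        # Least-squares plane: the normal is the right singular vector with
        # the smallest singular value of the centered point matrix.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        normals[lab] = vt[-1]
    return labels, normals

# Synthetic example: a flat "ground" patch with near-uniform color.
rgb = np.random.rand(64, 64, 3) * 0.1 + 0.4
xx, yy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
xyz = np.dstack([xx, yy, np.zeros_like(xx)])      # all points on the z = 0 plane
labels, normals = superpixel_plane_normals(rgb, xyz)
print(next(iter(normals.values())))               # approximately [0, 0, +-1]
```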
{"title":"Superpixel Clustering and Planar Fit Segmentation of 3D LIDAR Point Clouds","authors":"H. Mahmoudabadi, Timothy Shoaf, M. Olsen","doi":"10.1109/COMGEO.2013.2","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.2","url":null,"abstract":"Terrestrial laser scanning (TLS, also called ground based Light Detection and Ranging, LIDAR) is an effective data acquisition method capable of high precision, detailed 3D models for surveying natural environments. However, despite the high density, and quality, of the data itself, the data acquired contains no direct intelligence necessary for further modeling and analysis - merely the 3D geometry (XYZ), 3-component color (RGB), and laser return signal strength (I) for each point. One common task for LIDAR data processing is the selection of an appropriate methodology for the extraction of geometric features from the irregularly distributed point clouds. Such recognition schemes must accomplish both segmentation and classification. Planar (or other geometrically primitive) feature extraction is a common method for point cloud segmentation, however, current algorithms are computationally expensive and often do not utilize color or intensity information. In this paper we present an efficient algorithm, that takes advantage of both colorimetric and geometric data as input and consists of three principal steps to accomplish a more flexible form of feature extraction. First, we employ a Simple Linear Iterative Clustering (SLIC) super pixel algorithm for clustering and dividing the colorimetric data. Second, we use a plane-fitting technique on each significantly smaller cluster to produce a set of normal vectors corresponding to each super pixel. Last, we utilize a Least Squares Multi-class Support Vector Machine (LSMSVM) to classify each cluster as either \"ground\", \"wall\", or \"natural feature\". Despite the challenging problems presented by the occlusion of features during data acquisition, our method effectively generates accurate (>85%) segmentation results by utilizing the color space information, in addition to the standard geometry, during segmentation.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121461197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
DEM Generation with SAR Interferometry Based on Weighted Wavelet Phase Unwrapping
M. Rahnemoonfar, Beth Plale
Synthetic Aperture Radar Interferometry (InSAR) is an important 3D imaging technique for generating a Digital Elevation Model (DEM). The phase difference between the complex SAR images displays an interference fringe pattern from which the elevation of any point in the imaged terrain can be determined. Phase unwrapping is the most critical step in the signal processing of InSAR, especially in DEM generation. In this paper, a least-squares weighted wavelet technique is used which overcomes the slow convergence and limited accuracy of the Gauss-Seidel method. By decomposing the grid into low-frequency and high-frequency components, the problem is solved for the low-frequency component. The technique is applied to ENVISAT ASAR images of the Bam area. The experimental results, compared with the Statistical-Cost Network Flow approach and with the DEM generated from a 1/25000 scale map of the area, show the effectiveness of the proposed method.
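As a point of reference for the least-squares formulation, the sketch below implements the classic unweighted least-squares phase unwrapping solved globally with a discrete cosine transform. This is not the paper's weighted wavelet solver; it only illustrates the underlying least-squares step that the weighted wavelet technique accelerates.

```python
import numpy as np
from scipy.fft import dctn, idctn

def wrap(phi):
    """Wrap phase values into (-pi, pi]."""
    return np.angle(np.exp(1j * phi))

def ls_unwrap(psi):
    """Unweighted least-squares phase unwrapping of a 2-D wrapped phase `psi`,
    solved with a type-II DCT (discrete Poisson equation, Neumann boundaries)."""
    M, N = psi.shape
    dx = np.zeros_like(psi)
    dy = np.zeros_like(psi)
    dx[:, :-1] = wrap(np.diff(psi, axis=1))    # wrapped horizontal differences
    dy[:-1, :] = wrap(np.diff(psi, axis=0))    # wrapped vertical differences
    rho = dx.copy()                            # divergence of the wrapped gradient
    rho[:, 1:] -= dx[:, :-1]
    rho += dy
    rho[1:, :] -= dy[:-1, :]
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                          # DC term is arbitrary
    phi_hat = dctn(rho, norm="ortho") / denom
    phi_hat[0, 0] = 0.0
    return idctn(phi_hat, norm="ortho")

# Noise-free ramp: the LS solution matches the true phase up to a constant.
true = np.add.outer(np.linspace(0, 8 * np.pi, 128), np.linspace(0, 6 * np.pi, 128))
est = ls_unwrap(wrap(true))
print(np.std((est - est.mean()) - (true - true.mean())))   # ~0
```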
{"title":"DEM Generation with SAR Interferometry Based on Weighted Wavelet Phase Unwrapping","authors":"M. Rahnemoonfar, Beth Plale","doi":"10.1109/COMGEO.2013.14","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.14","url":null,"abstract":"Synthetic aperture radar Interferometry (InSAR) is a significant 3D imaging technique to generate a Digital Elevation Model (DEM). The phase difference between the complex SAR images displays an interference fringe pattern from which the elevation of any point in the imaged terrain can be determined. Phase unwrapping is the most critical step in the signal processing of InSAR and especially in DEM generation. In this paper, a least squares weighted wavelet technique is used which overcomes the problem of slow convergence and the less-accurate Gauss-Seidel method. Here, by decomposing a grid to low-frequency and high-frequency components, the problem for a low-frequency component is solved. The technique is applied to ENVISAT ASAR images of Bam area. The experimental results compared with the Statistical-Cost Network Flow approach and the DEM generated from a 1/25000 scale map of the area shows the effectiveness of the proposed method.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"19 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132175153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Analysis of Spatial Autocorrelation for Traffic Accident Data Based on Spatial Decision Tree
Bimal Ghimire, Shrutilipi Bhattacharjee, S. Ghosh
With the rapid increase in the scope, coverage, and volume of geographic datasets, knowledge discovery from spatial data has drawn a lot of research interest over the last few decades. Traditional analytical techniques cannot easily discover new, implicit patterns and relationships that are hidden in geographic datasets. The purpose of this work is to evaluate the performance of traditional and spatial data mining techniques for analysing spatial certainty, such as spatial autocorrelation. The analysis is done with a classification technique, i.e., a Decision Tree (DT) based approach built on a spatial diversity coefficient. The ID3 (Iterative Dichotomiser 3) algorithm is used for building the conventional and spatial decision trees. A synthetically generated spatial accident dataset and a real accident dataset are used for this purpose. The spatial DT (SDT) is found to be more effective in spatial decision making.
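The ID3 split criterion referenced above chooses, at each node, the attribute with the greatest information gain (entropy reduction); a minimal sketch of that computation is below. The accident records are made-up placeholders, and the paper's spatial variant additionally weighs a spatial diversity coefficient, which is not reproduced here.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attribute_index):
    """ID3 information gain of splitting `rows` on one attribute column."""
    base = entropy(labels)
    partitions = {}
    for row, lab in zip(rows, labels):
        partitions.setdefault(row[attribute_index], []).append(lab)
    remainder = sum(len(part) / len(labels) * entropy(part)
                    for part in partitions.values())
    return base - remainder

# Hypothetical accident records: (road_type, weather) -> severity class.
rows = [("highway", "rain"), ("highway", "dry"), ("urban", "rain"), ("urban", "dry")]
labels = ["severe", "severe", "minor", "minor"]
print(information_gain(rows, labels, 0))  # road_type separates classes: gain = 1.0
print(information_gain(rows, labels, 1))  # weather does not: gain = 0.0
```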
{"title":"Analysis of Spatial Autocorrelation for Traffic Accident Data Based on Spatial Decision Tree","authors":"Bimal Ghimire, Shrutilipi Bhattacharjee, S. Ghosh","doi":"10.1109/COMGEO.2013.19","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.19","url":null,"abstract":"With rapid increase of scope, coverage and volume of geographic datasets, knowledge discovery from spatial data have drawn a lot of research interest for last few decades. Traditional analytical techniques cannot easily discover new, implicit patterns, and relationships that are hidden into geographic datasets. The principle of this work is to evaluate the performance of traditional and spatial data mining techniques for analysing spatial certainty, such as spatial autocorrelation. Analysis is done by classification technique, i.e. a Decision Tree (DT) based approach on a spatial diversity coefficient. ID3 (Iterative Dichotomiser 3) algorithm is used for building the conventional and spatial decision trees. A synthetically generated spatial accident dataset and real accident dataset are used for this purpose. The spatial DT (SDT) is found to be more significant in spatial decision making.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115778820","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Storm System Database: A Big Data Approach to Moving Object Databases
Brian Olsen, Mark McKenney
Rainfall data is often collected by measuring the amount of precipitation collected in a physical container at a site. Such methods provide precise data for those sites, but are limited in granularity to the number and placement of collection devices. We use radar images of storm systems, which are publicly available and provide rainfall estimates for large regions of the globe, but at the cost of precision. We present a moving object database called Storm DB that stores decibel measurements of rain clouds as moving regions, i.e., we store a single rain cloud as a region that changes shape and position over time. Storm DB is a prototype system that answers rain amount queries over a user-defined time duration for any point in the continental United States. In other words, a user can ask the database for the amount of rainfall that fell at any point in the US over a specified time window. Although this single query seems straightforward, it is complicated by the expected size of the dataset: storm clouds are numerous, radar images are available in high resolution, and our system will collect data over a large timeframe; thus, we expect the number and size of moving regions representing storm clouds to be large. To implement our proposed query, we bring together the following concepts: (i) image processing to retrieve storm clouds from radar images, (ii) interpolation mechanisms to construct moving regions with infinite temporal resolution from region snapshots, (iii) transformations to compute exact point-in-moving-polygon queries using 2-dimensional rather than 3-dimensional algorithms, (iv) GPU algorithms for massively parallel computation of the duration that a point lies inside a moving polygon, and (v) map/reduce algorithms to provide scalability. The resulting prototype lays the groundwork for building big data solutions for moving object databases.
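Concept (iv), computing how long a query point lies inside a moving polygon, can be sketched on the CPU as follows. The sampling-based approximation shown here is only illustrative; the paper describes exact 2-D transformations and GPU/map-reduce parallelisation, and the storm-cell snapshots below are synthetic.

```python
import numpy as np

def point_in_polygon(pt, poly):
    """Ray-casting test: is 2-D point `pt` inside polygon `poly` of shape (V, 2)?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rainfall_duration(pt, snapshots, times, steps_per_interval=50):
    """Approximate how long `pt` lies inside a storm region whose boundary is
    given as polygon snapshots (same vertex count) at increasing `times`,
    linearly interpolating vertices between consecutive snapshots."""
    total = 0.0
    for p0, p1, t0, t1 in zip(snapshots, snapshots[1:], times, times[1:]):
        dt = (t1 - t0) / steps_per_interval
        for s in range(steps_per_interval):
            alpha = (s + 0.5) / steps_per_interval
            poly = (1 - alpha) * p0 + alpha * p1     # interpolated region
            if point_in_polygon(pt, poly):
                total += dt
    return total

# A square storm cell drifting east past the query point (0.5, 0.5).
square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
snapshots = [square, square + [2.0, 0.0]]            # moves 2 units in 1 hour
print(rainfall_duration((0.5, 0.5), snapshots, [0.0, 1.0]))   # ~0.25 hours inside
```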
{"title":"Storm System Database: A Big Data Approach to Moving Object Databases","authors":"Brian Olsen, Mark McKenney","doi":"10.1109/COMGEO.2013.30","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.30","url":null,"abstract":"Rainfall data is often collected by measuring the amount of precipitation collected in a physical container at a site. Such methods provide precise data for those sites, but are limited in granularity to the number and placement of collection devices. We use radar images of storm systems that are publicly available and provide rainfall estimates for large regions of the globe, but at the cost of loss of precision. We present a moving object database called Storm DB that stores decibel measurements of rain clouds as moving regions, i.e., we store a single rain cloud as a region that changes shape and position over time. Storm DB is a prototype system that answers rain amount queries over a user defined time duration for any point in the continental United States. In other words, a user can ask the database for the amount of rainfall that fell at any point in the US over a specified time window. Although this single query seems straightforward, it is complicated due to the expected size of the dataset: storm clouds are numerous, radar images are available in high resolution, and our system will collect data over a large timeframe, thus, we expect the number and size of moving regions representing storm clouds to be large. To implement our proposed query, we bring together the following concepts: (i) image processing to retrieve storm clouds from radar images, (ii) interpolation mechanisms to construct moving regions with infinite temporal resolution from region snapshots, (iii) transformations to compute exact point in moving polygon queries using 2-dimensional rather than 3-dimensional algorithms, (iv) GPU algorithms for massively parallel computation of the duration that a point lies inside a moving polygon, and (v) map/reduce algorithms to provide scalability. The resulting prototype lays the groundwork for building big data solutions for moving object databases.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116788786","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Road Segmentation in Aerial Images by Exploiting Road Vector Data
Jiangye Yuan, A. Cheriyadat
Segmenting road regions from high-resolution aerial images is an important yet challenging task due to large variations in road surfaces. This paper presents a simple and effective method that accurately segments road regions with weak supervision provided by road vector data, which are publicly available. The method is based on the observation that in aerial images road edges tend to appear as visible boundaries parallel to road vectors. A factorization-based segmentation algorithm is applied to the image, which accurately localizes boundaries for both texture and non-texture regions. We analyze the spatial distribution of boundary pixels with respect to the road vector, and identify the road edge that separates roads from adjacent areas based on the distribution peaks. The proposed method achieves on average 90% recall and 79% precision on large aerial images covering various types of roads.
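The key step of analysing how boundary pixels are distributed around the road vector can be sketched as below: project detected boundary pixels onto the axis perpendicular to the vector, histogram the signed offsets, and take the strongest peak on each side as a road edge. The synthetic pixels, bin width, and peak rule are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np

def road_edges_from_boundaries(boundary_xy, road_p0, road_p1, bin_width=0.5):
    """Estimate the two road-edge offsets from a road-vector segment.
    boundary_xy: (N, 2) boundary-pixel coordinates; road_p0/road_p1: segment
    endpoints. Returns the dominant signed perpendicular offset on each side."""
    p0 = np.asarray(road_p0, float)
    p1 = np.asarray(road_p1, float)
    direction = (p1 - p0) / np.linalg.norm(p1 - p0)
    normal = np.array([-direction[1], direction[0]])        # unit perpendicular
    offsets = (np.asarray(boundary_xy, float) - p0) @ normal
    bins = np.arange(offsets.min(), offsets.max() + bin_width, bin_width)
    counts, edges = np.histogram(offsets, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    left = centers[centers < 0][np.argmax(counts[centers < 0])]
    right = centers[centers > 0][np.argmax(counts[centers > 0])]
    return left, right

# Synthetic boundary pixels clustered about 3 px either side of a horizontal road.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 100, 400)
ys = np.concatenate([rng.normal(-3, 0.3, 200), rng.normal(3, 0.3, 200)])
print(road_edges_from_boundaries(np.column_stack([xs, ys]), (0, 0), (100, 0)))
# -> approximately (-3, 3): the estimated road-edge offsets
```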
{"title":"Road Segmentation in Aerial Images by Exploiting Road Vector Data","authors":"Jiangye Yuan, A. Cheriyadat","doi":"10.1109/COMGEO.2013.4","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.4","url":null,"abstract":"Segmenting road regions from high resolution aerial images is an important yet challenging task due to large variations on road surfaces. This paper presents a simple and effective method that accurately segments road regions with a weak supervision provided by road vector data, which are publicly available. The method is based on the observation that in aerial images road edges tend to have more visible boundaries parallel to road vectors. A factorization-based segmentation algorithm is applied to an image, which accurately localize boundaries for both texture and nontexture regions. We analyze the spatial distribution of boundary pixels with respect to the road vector, and identify the road edge that separates roads from adjacent areas based on the distribution peaks. The proposed method achieves on average 90% recall and 79% precision on large aerial images covering various types of roads.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129143932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
The Personality of Venues: Places and the Five-Factors ('Big Five') Model of Personality
Vlad Tanasescu, Christopher B. Jones, Gualtiero Colombo, M. Chorley, S. M. Allen, R. Whitaker
Venues are often described by their type and characteristics, while their level of appreciation by users is indicated through a score (star rating). However, the judgement of a particular venue by an individual may be more influenced by that individual's experience and personality. In psychology, the five-factor model of personality, or 'Big Five' model, describes an individual's personality in terms of openness, conscientiousness, extraversion, agreeableness and neuroticism. This work explores the notion of the 'personality of a venue' by reference to personality-traits research in psychology. To determine the personality of a venue, keywords are extracted from reviews of venues and matched to terms indicative of the personality-trait dimensions. The work is completed with a human experiment in which participants qualify venues according to a set of personality descriptors. Correlations are found between the human annotators and the automated extraction approach.
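A minimal sketch of the keyword-matching step, scoring venue reviews against small Big-Five term lists, is shown below; the tiny lexicon and reviews are made-up placeholders, not the trait lexicon or data used in the paper.

```python
import re
from collections import Counter

# Hypothetical mini-lexicon: a few indicator terms per Big-Five dimension.
TRAIT_TERMS = {
    "openness":          {"creative", "artistic", "unusual", "curious"},
    "conscientiousness": {"organized", "tidy", "efficient", "punctual"},
    "extraversion":      {"lively", "social", "loud", "energetic"},
    "agreeableness":     {"friendly", "welcoming", "helpful", "warm"},
    "neuroticism":       {"stressful", "anxious", "chaotic", "tense"},
}

def venue_personality(reviews):
    """Score a venue on each trait as the share of review tokens matching
    that trait's term list (a crude bag-of-words stand-in)."""
    tokens = [w for text in reviews for w in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    total = sum(counts.values()) or 1
    return {trait: sum(counts[t] for t in terms) / total
            for trait, terms in TRAIT_TERMS.items()}

reviews = ["Lively and loud bar, very social, energetic crowd",
           "Friendly staff"]
scores = venue_personality(reviews)
print(max(scores, key=scores.get), scores)   # extraversion scores highest here
```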
{"title":"The Personality of Venues: Places and the Five-Factors ('Big Five') Model of Personality","authors":"Vlad Tanasescu, Christopher B. Jones, Gualtiero Colombo, M. Chorley, S. M. Allen, R. Whitaker","doi":"10.1109/COMGEO.2013.12","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.12","url":null,"abstract":"Venues are often described by their type and characteristics, while their level of appreciation by users is indicated through a score (star rating). However the judgement on a particular venue by an individual may more influenced by the individual's experience and personality. In psychology, the five-factor model of personality, or 'Big Five' model, describes an individual's personality in terms of openness, conscientiousness, extraversion, agreeableness and neuroticism. This work explores the notion of 'personality of a venue' by reference to personality traits research in psychology. To determine the personality of a venue, keywords are extracted from reviews of venues, and matched to terms indicative of personality traits dimensions. The work is completed with a human experiment where participants qualify venues according to a set of personality descriptors. Correlations are found between the human annotators and the automated extraction approach.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126069338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
Asking Spatial Questions to Identify GIS Functionality
Song Gao, Michael F. Goodchild
Current desktop-GIS software cannot answer users' spatial questions directly. The GIS functionality is hard to identify and use without specific GIS training, because of the complex hierarchical organization of the software and the gap between users' spatial thinking and the systems' implementation descriptions. In order to bridge this gap, we propose a semantic framework for designing a question-based user interface that integrates different levels of ontologies (spatial concept ontology, domain ontology and task ontology) to guide the process of extracting the core spatial concepts and translating them into a set of equivalent computational or operational GIS tasks. We also list some typical spatial questions that might be posed for spatial analysis and computation. The principle introduced in this paper could be applied not only to desktop-GIS software but also to web map services. The semantic framework would be useful for enhancing spatial reasoning in web search engines (e.g. Google semantic search) and for answering questions in location-based services as well (e.g. the iPhone Siri assistant).
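The idea of translating a natural-language spatial question into GIS operations via concept lookups can be sketched very roughly as below; the keyword-to-task table is a hypothetical stand-in for the spatial-concept, domain, and task ontologies the paper proposes.

```python
import re

# Hypothetical mapping from spatial-concept keywords to GIS operations.
CONCEPT_TO_TASK = {
    "near":      "buffer + spatial join",
    "within":    "point-in-polygon selection",
    "closest":   "nearest-neighbour search",
    "route":     "shortest-path (network analysis)",
    "overlap":   "polygon intersection (overlay)",
    "elevation": "raster value extraction",
}

def gis_tasks_for_question(question):
    """Return the GIS operations suggested by the spatial concepts that
    appear in a free-text question (very rough keyword matching)."""
    words = set(re.findall(r"[a-z]+", question.lower()))
    return [task for concept, task in CONCEPT_TO_TASK.items() if concept in words]

print(gis_tasks_for_question("Which schools are within the flood zone near the river?"))
# -> ['buffer + spatial join', 'point-in-polygon selection']
```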
{"title":"Asking Spatial Questions to Identify GIS Functionality","authors":"Song Gao, Michael F. Goochild","doi":"10.1109/COMGEO.2013.18","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.18","url":null,"abstract":"Current desktop-GIS software cannot answer users' spatial questions directly. The GIS functionality is hard to identify and use without specific training of GIS skills because of the complex hierarchical organization and the gap between users' spatial thinking and systems' implement descriptions. In order to bridge this gap, we propose a semantic framework for designing a question-based user interface that integrates different levels of ontologies (spatial concept ontology, domain ontology and task ontology) to guide the process of extracting the core spatial concepts and translating them into a set of equivalent computational or operational GIS tasks. We also list some typical spatial questions that might be posed for spatial analysis and computation. The principle introduced in this paper could be applied not only to desktop-GIS software but also to web map services. The semantic framework would be useful to enhance the ability of spatial reasoning in web search engines (e.g. Google semantic search) and answering questions in location-based services as well (e.g. iPhone Siri assistant).","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134537773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
Similarity-Based Compression of GPS Trajectory Data
Jeremy Birnbaum, H. Meng, Jeong-Hyon Hwang, C. Lawson
The recent increase in the use of GPS-enabled devices has introduced a new demand for efficiently storing trajectory data. In this paper, we present a new technique that has a higher compression ratio for trajectory data than existing solutions. This technique splits trajectories into sub-trajectories according to the similarities among them. For each collection of similar sub-trajectories, our technique stores only one sub-trajectory's spatial data. Each sub-trajectory is then expressed as a mapping between itself and a previous sub-trajectory. In general, these mappings can be highly compressed due to a strong correlation between the time values of trajectories. This paper presents evaluation results that show the superiority of our technique over previous solutions.
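A much-simplified sketch of the idea, storing one reference sub-trajectory's spatial data and expressing similar sub-trajectories only as timestamps plus a pointer onto that reference, is given below; the similarity threshold, distance measure, and mapping format are illustrative assumptions, not the paper's encoding.

```python
import math

def point_dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def is_similar(traj_a, traj_b, tol=5.0):
    """Crude similarity test: same length and every point within `tol`."""
    return (len(traj_a) == len(traj_b) and
            all(point_dist(p, q) <= tol for p, q in zip(traj_a, traj_b)))

def compress(sub_trajectories):
    """Store spatial data once per group of similar sub-trajectories; other
    group members keep only their timestamps plus a pointer to the reference."""
    references, compressed = [], []
    for traj in sub_trajectories:                 # traj: list of (t, x, y)
        coords = [(x, y) for _, x, y in traj]
        times = [t for t, _, _ in traj]
        for ref_id, ref_coords in enumerate(references):
            if is_similar(coords, ref_coords):
                compressed.append({"ref": ref_id, "times": times})
                break
        else:
            references.append(coords)
            compressed.append({"ref": len(references) - 1, "times": times})
    return references, compressed

# Two commutes along (almost) the same path on different days.
day1 = [(0, 0.0, 0.0), (60, 100.0, 5.0), (120, 200.0, 10.0)]
day2 = [(86400, 1.0, 0.5), (86465, 101.0, 5.5), (86525, 201.0, 10.5)]
refs, enc = compress([day1, day2])
print(len(refs), enc[1])   # 1 reference path; day2 stored as times + pointer
```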
{"title":"Similarity-Based Compression of GPS Trajectory Data","authors":"Jeremy Birnbaum, H. Meng, Jeong-Hyon Hwang, C. Lawson","doi":"10.1109/COMGEO.2013.15","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.15","url":null,"abstract":"The recent increase in the use of GPS-enabled devices has introduced a new demand for efficiently storing trajectory data. In this paper, we present a new technique that has a higher compression ratio for trajectory data than existing solutions. This technique splits trajectories into sub-trajectories according to the similarities among them. For each collection of similar sub-trajectories, our technique stores only one sub-trajectory's spatial data. Each sub-trajectory is then expressed as a mapping between itself and a previous sub-trajectory. In general, these mappings can be highly compressed due to a strong correlation between the time values of trajectories. This paper presents evaluation results that show the superiority of our technique over previous solutions.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126781083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
A Framework for Efficient and Convenient Evaluation of Trajectory Compression Algorithms
Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, C. Lawson
Trajectory compression algorithms eliminate redundant information in the history of a moving object. Such compression enables efficient transmission, storage, and processing of trajectory data. Although a number of compression algorithms have been proposed in the literature, no common benchmarking platform for evaluating their effectiveness exists. This paper presents a benchmarking framework for efficiently, conveniently, and accurately comparing trajectory compression algorithms. This framework supports various compression algorithms and metrics defined in the literature, as well as three synthetic trajectory generators that have different trade-offs. It also has a highly extensible architecture that facilitates the incorporation of new compression algorithms, evaluation metrics, and trajectory data generators. This paper provides a comprehensive overview of trajectory compression algorithms, evaluation metrics and data generators in conjunction with detailed discussions on their unique benefits and relevant application scenarios. Furthermore, this paper describes challenges that arise in the design and implementation of the above framework and our approaches to tackling these challenges. Finally, this paper presents evaluation results that demonstrate the utility of the benchmarking framework.
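A toy version of such a harness is sketched below: it runs two simple compression algorithms (uniform sampling and Douglas-Peucker, common baselines in this literature) over a trajectory and reports the compression ratio together with a mean synchronized Euclidean distance (SED) error. The code is only an illustrative assumption about the framework's structure, not the benchmarking platform described in the paper.

```python
import math

def sed_error(original, compressed):
    """Mean synchronized Euclidean distance: compare each original point with
    the compressed trajectory linearly interpolated to the same timestamp."""
    errs = []
    for t, x, y in original:
        for (t0, x0, y0), (t1, x1, y1) in zip(compressed, compressed[1:]):
            if t0 <= t <= t1:
                a = 0.0 if t1 == t0 else (t - t0) / (t1 - t0)
                xi, yi = x0 + a * (x1 - x0), y0 + a * (y1 - y0)
                errs.append(math.hypot(x - xi, y - yi))
                break
    return sum(errs) / len(errs)

def uniform_sample(traj, keep_every=3):
    """Keep every k-th point, always retaining the final point."""
    kept = traj[::keep_every]
    return kept if kept[-1] == traj[-1] else kept + [traj[-1]]

def douglas_peucker(traj, eps=1.0):
    """Classic Douglas-Peucker simplification on the spatial coordinates."""
    if len(traj) <= 2:
        return list(traj)
    (t0, x0, y0), (t1, x1, y1) = traj[0], traj[-1]
    seg_len = math.hypot(x1 - x0, y1 - y0) or 1e-12
    # Perpendicular distance of each interior point to the anchor-end chord.
    dists = [abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / seg_len
             for _, x, y in traj[1:-1]]
    idx, dmax = max(enumerate(dists, start=1), key=lambda p: p[1])
    if dmax <= eps:
        return [traj[0], traj[-1]]
    return douglas_peucker(traj[:idx + 1], eps)[:-1] + douglas_peucker(traj[idx:], eps)

def benchmark(traj, algorithms):
    """Report compression ratio and mean SED error for each algorithm."""
    for name, algo in algorithms.items():
        out = algo(traj)
        print(f"{name:16s} ratio={len(traj) / len(out):4.1f}  "
              f"mean SED={sed_error(traj, out):.3f}")

# A synthetic trajectory: one point per second along a gently bending path.
traj = [(t, t * 1.0, 0.02 * t * t) for t in range(60)]
benchmark(traj, {"uniform(k=3)": uniform_sample,
                 "douglas-peucker": lambda tr: douglas_peucker(tr, eps=0.5)})
```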
{"title":"A Framework for Efficient and Convenient Evaluation of Trajectory Compression Algorithms","authors":"Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, C. Lawson","doi":"10.1109/COMGEO.2013.5","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.5","url":null,"abstract":"Trajectory compression algorithms eliminate redundant information in the history of a moving object. Such compression enables efficient transmission, storage, and processing of trajectory data. Although a number of compression algorithms have been proposed in the literature, no common benchmarking platform for evaluating their effectiveness exists. This paper presents a benchmarking framework for efficiently, conveniently, and accurately comparing trajectory compression algorithms. This framework supports various compression algorithms and metrics defined in the literature, as well as three synthetic trajectory generators that have different trade-offs. It also has a highly extensible architecture that facilitates the incorporation of new compression algorithms, evaluation metrics, and trajectory data generators. This paper provides a comprehensive overview of trajectory compression algorithms, evaluation metrics and data generators in conjunction with detailed discussions on their unique benefits and relevant application scenarios. Furthermore, this paper describes challenges that arise in the design and implementation of the above framework and our approaches to tackling these challenges. Finally, this paper presents evaluation results that demonstrate the utility of the benchmarking framework.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126520831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6