In previous geographic information query systems, the query conditions are either fixed in the program or exposed to users through an SQL interface. The former is inflexible, while the latter requires users to know the SQL query language. This article describes how a rule engine, backed by a rule base, can answer queries composed from simple combinations of natural-language modules. First, users formulate a query scheme by combining natural-language modules according to their needs. Then, the query scheme is delivered to the rule engine for reasoning and matching; the engine finds the matching rule and executes it. Finally, the execution results are returned to the user.
"Complex GIS Query Based on Rule Engine," Liu Chen-fan and Liu Hai-yan, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.28
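The module-to-rule matching loop the abstract describes can be sketched roughly as follows; the rule base, module vocabulary, and SQL templates here are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of rule-base matching for natural-language query
# modules. The RULE_BASE contents are hypothetical examples.
RULE_BASE = [
    # (modules a rule requires, the query the rule executes)
    (frozenset({"find", "school", "near", "river"}),
     "SELECT s.name FROM schools s, rivers r "
     "WHERE ST_DWithin(s.geom, r.geom, 1000)"),
    (frozenset({"find", "road", "in", "district"}),
     "SELECT rd.name FROM roads rd, districts d "
     "WHERE ST_Within(rd.geom, d.geom)"),
]

def match_rule(modules):
    """Return the action of the first rule whose required modules all
    appear in the user's query scheme, or None when nothing matches."""
    scheme = set(modules)
    for required, action in RULE_BASE:
        if required <= scheme:
            return action
    return None

sql = match_rule(["find", "school", "near", "river"])
```

A production rule engine would add conflict resolution and chaining; the subset test above only illustrates the matching step.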
Terrestrial laser scanning (TLS, also called ground-based Light Detection and Ranging, LIDAR) is an effective data acquisition method capable of producing high-precision, detailed 3D models for surveying natural environments. However, despite the high density and quality of the data, the points acquired carry no direct intelligence for further modeling and analysis - merely the 3D geometry (XYZ), three-component color (RGB), and laser return signal strength (I) of each point. One common task in LIDAR data processing is selecting an appropriate methodology for extracting geometric features from the irregularly distributed point clouds. Such recognition schemes must accomplish both segmentation and classification. Planar (or other geometrically primitive) feature extraction is a common method for point cloud segmentation; however, current algorithms are computationally expensive and often do not use color or intensity information. In this paper we present an efficient algorithm that takes both colorimetric and geometric data as input and consists of three principal steps, achieving a more flexible form of feature extraction. First, we employ the Simple Linear Iterative Clustering (SLIC) superpixel algorithm to cluster and divide the colorimetric data. Second, we apply a plane-fitting technique to each significantly smaller cluster to produce a normal vector for each superpixel. Last, we use a Least Squares Multi-class Support Vector Machine (LSMSVM) to classify each cluster as "ground", "wall", or "natural feature". Despite the challenges posed by occlusion of features during data acquisition, our method generates accurate (>85%) segmentation results by using color-space information, in addition to the standard geometry, during segmentation.
"Superpixel Clustering and Planar Fit Segmentation of 3D LIDAR Point Clouds," H. Mahmoudabadi, Timothy Shoaf, and M. Olsen, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.2
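The plane-fitting step (the second stage of the pipeline) can be sketched as a per-cluster least-squares fit; the SLIC clustering and the LS-SVM classifier are omitted here, and the synthetic "ground" cluster below is an illustrative stand-in for a real superpixel:

```python
import numpy as np

def fit_plane_normal(points):
    """Least-squares plane normal of an (n, 3) cluster: the right
    singular vector with the smallest singular value of the centered
    points is orthogonal to the best-fit plane."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]  # unit-length normal

# A roughly horizontal cluster should yield a normal close to the z axis.
rng = np.random.default_rng(0)
ground = np.c_[rng.uniform(0, 10, (200, 2)), rng.normal(0, 0.01, 200)]
normal = fit_plane_normal(ground)
```

A classifier can then separate near-vertical normals ("ground") from near-horizontal ones ("wall"), with the remainder as candidate natural features.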
Synthetic aperture radar interferometry (InSAR) is an important 3D imaging technique for generating a Digital Elevation Model (DEM). The phase difference between complex SAR images forms an interference fringe pattern from which the elevation of any point in the imaged terrain can be determined. Phase unwrapping is the most critical step in InSAR signal processing, especially for DEM generation. In this paper, a least-squares weighted wavelet technique is used that overcomes the slow convergence and limited accuracy of the Gauss-Seidel method. By decomposing the grid into low-frequency and high-frequency components, the problem is solved on the low-frequency component. The technique is applied to ENVISAT ASAR images of the Bam area. The experimental results, compared with the statistical-cost network-flow approach and with a DEM generated from a 1:25,000-scale map of the area, show the effectiveness of the proposed method.
"DEM Generation with SAR Interferometry Based on Weighted Wavelet Phase Unwrapping," M. Rahnemoonfar and Beth Plale, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.14
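The paper's method is a 2D weighted-wavelet formulation; for intuition only, here is the elementary one-dimensional unwrapping it generalizes, which rewraps successive phase differences into a single cycle and integrates them (a sketch, not the authors' algorithm):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Itoh-style 1D unwrapping: map each successive phase difference
    back into [-pi, pi) and integrate by cumulative summation."""
    d = np.diff(wrapped)
    d = (d + np.pi) % (2 * np.pi) - np.pi
    return np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))

# A smooth phase ramp wrapped into (-pi, pi] is recovered exactly,
# because every true gradient is smaller than pi in magnitude.
true_phase = np.linspace(0, 6 * np.pi, 100)
wrapped = np.angle(np.exp(1j * true_phase))
recovered = unwrap_1d(wrapped)
```

The 2D problem is much harder because noise and discontinuities make the integration path-dependent, which is what least-squares and wavelet formulations address.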
With the rapid increase in the scope, coverage, and volume of geographic datasets, knowledge discovery from spatial data has drawn considerable research interest over the last few decades. Traditional analytical techniques cannot easily discover the new, implicit patterns and relationships hidden in geographic datasets. The aim of this work is to evaluate the performance of traditional and spatial data mining techniques for analysing spatial certainty, such as spatial autocorrelation. The analysis uses a classification technique, namely a Decision Tree (DT) based approach built on a spatial diversity coefficient. The ID3 (Iterative Dichotomiser 3) algorithm is used to build both the conventional and the spatial decision trees. A synthetically generated spatial accident dataset and a real accident dataset are used for this purpose. The spatial DT (SDT) is found to be more significant in spatial decision making.
"Analysis of Spatial Autocorrelation for Traffic Accident Data Based on Spatial Decision Tree," Bimal Ghimire, Shrutilipi Bhattacharjee, and S. Ghosh, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.19
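The split criterion at the heart of ID3 is information gain; a minimal sketch on a made-up accident table (the attributes and labels are hypothetical, not the paper's data):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """ID3 splits on the attribute whose partition most reduces
    the entropy of the class labels."""
    n = len(labels)
    parts = {}
    for row, lab in zip(rows, labels):
        parts.setdefault(row[attr], []).append(lab)
    remainder = sum(len(p) / n * entropy(p) for p in parts.values())
    return entropy(labels) - remainder

rows = [{"road": "highway", "weather": "rain"},
        {"road": "highway", "weather": "dry"},
        {"road": "urban", "weather": "rain"},
        {"road": "urban", "weather": "dry"}]
labels = ["severe", "severe", "minor", "minor"]
```

Here splitting on `road` separates the labels perfectly (gain of 1 bit) while `weather` is uninformative (gain 0), so ID3 would split on `road` first; a spatial DT additionally weights such splits by a spatial coefficient.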
Rainfall data is often collected by measuring the precipitation accumulated in a physical container at a site. Such methods provide precise data for those sites but are limited in granularity by the number and placement of collection devices. We instead use radar images of storm systems, which are publicly available and provide rainfall estimates for large regions of the globe, at the cost of some precision. We present a moving object database called Storm DB that stores decibel measurements of rain clouds as moving regions, i.e., we store a single rain cloud as a region that changes shape and position over time. Storm DB is a prototype system that answers rain-amount queries over a user-defined time duration for any point in the continental United States. In other words, a user can ask the database how much rain fell at any point in the US over a specified time window. Although this single query seems straightforward, it is complicated by the expected size of the dataset: storm clouds are numerous, radar images are available in high resolution, and our system will collect data over a long timeframe, so we expect the number and size of moving regions representing storm clouds to be large. To implement the proposed query, we bring together the following concepts: (i) image processing to retrieve storm clouds from radar images, (ii) interpolation mechanisms to construct moving regions with infinite temporal resolution from region snapshots, (iii) transformations to compute exact point-in-moving-polygon queries using 2-dimensional rather than 3-dimensional algorithms, (iv) GPU algorithms for massively parallel computation of the duration that a point lies inside a moving polygon, and (v) map/reduce algorithms to provide scalability. The resulting prototype lays the groundwork for building big data solutions for moving object databases.
"Storm System Database: A Big Data Approach to Moving Object Databases," Brian Olsen and Mark McKenney, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.30
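Steps (ii)-(iv) combine snapshot interpolation with point-in-polygon tests. A CPU-only, sampling-based sketch of the duration query (the real system computes exact durations and runs on the GPU) might look like:

```python
def point_in_polygon(pt, poly):
    """Standard ray-casting point-in-polygon test."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            xint = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < xint:
                inside = not inside
    return inside

def interpolate(poly_a, poly_b, t):
    """Linear vertex interpolation between two region snapshots
    (vertices assumed to correspond one-to-one)."""
    return [(ax + t * (bx - ax), ay + t * (by - ay))
            for (ax, ay), (bx, by) in zip(poly_a, poly_b)]

def duration_inside(pt, poly_a, poly_b, total_time, steps=1000):
    """Approximate how long pt stays inside the moving region
    by sampling the interpolated polygon over time."""
    hits = sum(point_in_polygon(pt, interpolate(poly_a, poly_b, i / steps))
               for i in range(steps + 1))
    return total_time * hits / (steps + 1)

# Unit square drifting two units to the right over ten time units:
# the point (1.5, 0.5) is covered for half of that interval.
a = [(0, 0), (1, 0), (1, 1), (0, 1)]
b = [(2, 0), (3, 0), (3, 1), (2, 1)]
covered = duration_inside((1.5, 0.5), a, b, 10.0)
```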
Segmenting road regions in high-resolution aerial images is an important yet challenging task due to large variations in road surfaces. This paper presents a simple and effective method that accurately segments road regions with weak supervision provided by road vector data, which are publicly available. The method is based on the observation that in aerial images road edges tend to have more visible boundaries parallel to road vectors. A factorization-based segmentation algorithm, which accurately localizes boundaries for both texture and non-texture regions, is applied to the image. We analyze the spatial distribution of boundary pixels with respect to the road vector and identify the road edges that separate roads from adjacent areas based on the distribution peaks. The proposed method achieves on average 90% recall and 79% precision on large aerial images covering various types of roads.
"Road Segmentation in Aerial Images by Exploiting Road Vector Data," Jiangye Yuan and A. Cheriyadat, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.4
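The peak-finding step can be sketched as a histogram over signed perpendicular distances of boundary pixels from the road vector; the synthetic distances below are an illustrative stand-in for the output of the factorization-based segmentation:

```python
import numpy as np

def edge_offsets(signed_dists, bins=50):
    """Histogram the signed distances of boundary pixels from the
    road vector and take the peak on each side as a road edge."""
    hist, edges = np.histogram(signed_dists, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    neg, pos = centers < 0, centers > 0
    left = centers[neg][np.argmax(hist[neg])]
    right = centers[pos][np.argmax(hist[pos])]
    return left, right

# Hypothetical road about 8 units wide: boundary pixels concentrate
# near +/-4, with some clutter from unrelated boundaries.
rng = np.random.default_rng(1)
dists = np.concatenate([
    rng.normal(-4.0, 0.2, 500),   # left road edge
    rng.normal(4.0, 0.2, 500),    # right road edge
    rng.uniform(-8.0, 8.0, 100),  # clutter
])
left, right = edge_offsets(dists)
```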
Vlad Tanasescu, Christopher B. Jones, Gualtiero Colombo, M. Chorley, S. M. Allen, R. Whitaker
Venues are often described by their type and characteristics, while their level of appreciation by users is indicated through a score (star rating). However, the judgement of a particular venue by an individual may be more influenced by the individual's experience and personality. In psychology, the five-factor model of personality, or 'Big Five' model, describes an individual's personality in terms of openness, conscientiousness, extraversion, agreeableness, and neuroticism. This work explores the notion of the 'personality of a venue' by reference to personality-traits research in psychology. To determine the personality of a venue, keywords are extracted from reviews of the venue and matched to terms indicative of the personality-trait dimensions. The work is completed with a human experiment in which participants characterize venues according to a set of personality descriptors. Correlations are found between the human annotations and the automated extraction approach.
"The Personality of Venues: Places and the Five-Factors ('Big Five') Model of Personality," Vlad Tanasescu, Christopher B. Jones, Gualtiero Colombo, M. Chorley, S. M. Allen, and R. Whitaker, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.12
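The keyword-to-trait matching step might be sketched as follows; the trait lexicon here is a made-up stand-in for descriptor lists derived from personality research:

```python
# Hypothetical trait lexicon: each Big Five dimension maps to a set
# of descriptor terms (illustrative only, not the paper's term lists).
TRAIT_TERMS = {
    "openness": {"creative", "artistic", "curious", "unconventional"},
    "extraversion": {"lively", "social", "loud", "energetic"},
    "agreeableness": {"friendly", "warm", "welcoming", "helpful"},
}

def venue_personality(review_keywords):
    """Score each trait by the fraction of extracted review keywords
    that match its descriptor list."""
    kws = set(review_keywords)
    return {trait: len(kws & terms) / len(kws)
            for trait, terms in TRAIT_TERMS.items()}

scores = venue_personality(["friendly", "loud", "warm", "cheap"])
```

A realistic pipeline would also stem keywords and expand the lexicon with synonyms before matching.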
Current desktop-GIS software cannot answer users' spatial questions directly. GIS functionality is hard to identify and use without specific GIS training because of the software's complex hierarchical organization and the gap between users' spatial thinking and the systems' implementation descriptions. To bridge this gap, we propose a semantic framework for designing a question-based user interface that integrates different levels of ontologies (spatial concept ontology, domain ontology, and task ontology) to guide the process of extracting the core spatial concepts and translating them into a set of equivalent computational or operational GIS tasks. We also list some typical spatial questions that might be posed for spatial analysis and computation. The principle introduced in this paper could be applied not only to desktop-GIS software but also to web map services. The semantic framework would also be useful for enhancing spatial reasoning in web search engines (e.g. Google semantic search) and for answering questions in location-based services (e.g. the iPhone Siri assistant).
"Asking Spatial Questions to Identify GIS Functionality," Song Gao and Michael F. Goodchild, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.18
Jeremy Birnbaum, H. Meng, Jeong-Hyon Hwang, C. Lawson
The recent increase in the use of GPS-enabled devices has introduced a new demand for efficiently storing trajectory data. In this paper, we present a new technique that has a higher compression ratio for trajectory data than existing solutions. This technique splits trajectories into sub-trajectories according to the similarities among them. For each collection of similar sub-trajectories, our technique stores only one sub-trajectory's spatial data. Each sub-trajectory is then expressed as a mapping between itself and a previous sub-trajectory. In general, these mappings can be highly compressed due to a strong correlation between the time values of trajectories. This paper presents evaluation results that show the superiority of our technique over previous solutions.
"Similarity-Based Compression of GPS Trajectory Data," Jeremy Birnbaum, H. Meng, Jeong-Hyon Hwang, and C. Lawson, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.15
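The reference-plus-mapping idea can be sketched as follows; the similarity test and the encoding are simplified illustrations, not the paper's actual scheme:

```python
import math

def similar(traj_a, traj_b, tol=1.0):
    """Treat two sub-trajectories as similar when corresponding
    points lie within tol of each other (a simplistic criterion)."""
    return (len(traj_a) == len(traj_b) and
            all(math.dist(p, q) <= tol
                for p, q in zip(traj_a, traj_b)))

def compress(trajectories):
    """Store geometry once per group of similar sub-trajectories;
    later matches keep only their time values (the 'mapping')."""
    refs, encoded = [], []
    for points, times in trajectories:
        for i, ref in enumerate(refs):
            if similar(points, ref):
                encoded.append(("mapped", i, times))  # geometry omitted
                break
        else:
            refs.append(points)
            encoded.append(("ref", len(refs) - 1, times))
    return refs, encoded

# Two bus runs along the same route: the second stores no geometry.
bus_a = ([(0, 0), (1, 0), (2, 0)], [0, 10, 20])
bus_b = ([(0.1, 0), (1.1, 0), (2.0, 0.1)], [5, 14, 26])
refs, encoded = compress([bus_a, bus_b])
```

The time values in the mappings are themselves highly compressible because of the strong correlation the abstract notes.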
Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, C. Lawson
Trajectory compression algorithms eliminate redundant information in the history of a moving object. Such compression enables efficient transmission, storage, and processing of trajectory data. Although a number of compression algorithms have been proposed in the literature, no common benchmarking platform for evaluating their effectiveness exists. This paper presents a benchmarking framework for efficiently, conveniently, and accurately comparing trajectory compression algorithms. This framework supports various compression algorithms and metrics defined in the literature, as well as three synthetic trajectory generators that have different trade-offs. It also has a highly extensible architecture that facilitates the incorporation of new compression algorithms, evaluation metrics, and trajectory data generators. This paper provides a comprehensive overview of trajectory compression algorithms, evaluation metrics and data generators in conjunction with detailed discussions on their unique benefits and relevant application scenarios. Furthermore, this paper describes challenges that arise in the design and implementation of the above framework and our approaches to tackling these challenges. Finally, this paper presents evaluation results that demonstrate the utility of the benchmarking framework.
"A Framework for Efficient and Convenient Evaluation of Trajectory Compression Algorithms," Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, and C. Lawson, 2013 Fourth International Conference on Computing for Geospatial Research and Application, 2013-07-22. doi:10.1109/COMGEO.2013.5
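One error metric commonly reported by trajectory-compression benchmarks of this kind is the synchronized Euclidean distance (SED), which compares each original point with the position the compressed trajectory implies at the same timestamp; a minimal sketch, with made-up sample data:

```python
import math

def position_at(traj, t):
    """Linearly interpolate a trajectory of (t, x, y) samples at time t."""
    for (t1, x1, y1), (t2, x2, y2) in zip(traj, traj[1:]):
        if t1 <= t <= t2:
            a = 0.0 if t2 == t1 else (t - t1) / (t2 - t1)
            return (x1 + a * (x2 - x1), y1 + a * (y2 - y1))
    raise ValueError("t outside trajectory time span")

def mean_sed(original, compressed):
    """Mean synchronized Euclidean distance: average displacement
    between each original point and the compressed trajectory's
    interpolated position at the same timestamp."""
    errs = [math.dist((x, y), position_at(compressed, t))
            for t, x, y in original]
    return sum(errs) / len(errs)

# Dropping the middle point displaces it by 0.2 at t = 1.
orig = [(0, 0.0, 0.0), (1, 1.0, 0.2), (2, 2.0, 0.0)]
comp = [(0, 0.0, 0.0), (2, 2.0, 0.0)]
err = mean_sed(orig, comp)
```

A benchmark would compute such metrics, plus compression ratio and runtime, uniformly across every algorithm and generated dataset.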