Volunteered Geographic Information (VGI) is currently a "hot topic" in the GIS community. The OpenStreetMap (OSM) project is one of the most popular and well-supported examples of VGI. Traditional measures of spatial data quality are often not applicable to OSM, as in many cases it is not possible to access ground-truth spatial data for all regions mapped by OSM. We investigate the development of quality measures for OSM that operate in an unsupervised manner, without reference to a "trusted" source of ground-truth data. We present an analysis of OSM data from several European countries; the results highlight specific quality issues in OSM. Results of comparing OSM with ground-truth data for Ireland are also presented.
P. Mooney, P. Corcoran, A. Winstanley. "Towards quality metrics for OpenStreetMap." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869875
Raster maps contain valuable road information, which is especially important for areas where road vector data are not otherwise readily accessible. However, converting the road information in raster maps to road vector data usually requires significant user effort to achieve high accuracy. In this demo, we present Strabo, a system that extracts road vector data from heterogeneous raster maps. We demonstrate Strabo's fully automatic technique for extracting road vector data from raster maps with good image quality and its semi-automatic technique for handling raster maps with poor image quality. We show that Strabo requires minimal user input for extracting road vector data from raster maps of varying map complexity (i.e., overlapping features in maps) and image quality.
Yao-Yi Chiang, Craig A. Knoblock. "Strabo: a system for extracting road vector data from raster maps." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869889
We present an optimization approach to simplify sets of building footprints represented as polygons. We simplify each polygonal ring by selecting a subsequence of its original edges; the vertices of the simplified ring are defined by intersections of consecutive (and possibly extended) edges in the selected sequence. Our aim is to minimize the number of all output edges subject to a user-defined error tolerance. Since we earlier showed that the problem is NP-hard when requiring non-intersecting simple polygons as output, we cannot hope for an efficient, exact algorithm. Therefore, we present an efficient algorithm for a relaxed problem and an integer program (IP) that allows us to solve the original problem with existing software. Our IP is large, since it has O(m^6) constraints, where m is the number of input edges. In order to keep the running time small, we first consider a subset of only O(m) constraints. The choice of the constraints ensures some basic properties of the solution. Constraints that were neglected are added during optimization whenever they become violated by a new solution encountered. Using this approach we simplified a set of 144 buildings with a total of 2056 edges in 4.1 seconds on a standard desktop PC; the simplified building set contained 762 edges. During optimization, the number of constraints increased by a mere 13%. We also show how to apply cartographic quality measures in our method and discuss their effects on examples.
J. Haunert, A. Wolff. "Optimal and topologically safe simplification of building footprints." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869819
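The vertex construction described above, where each vertex of the simplified ring is the intersection of two consecutive (possibly extended) selected edges, can be sketched as follows. This is an illustrative helper under the usual parametric line-intersection formula, not the authors' implementation:

```python
# Sketch of the vertex-reconstruction step in edge-subsequence simplification:
# each new vertex is the intersection of the infinite lines through two
# consecutive selected edges of the ring.

def line_intersection(p1, p2, q1, q2):
    """Intersect the infinite lines through (p1, p2) and (q1, q2)."""
    (x1, y1), (x2, y2) = p1, p2
    (x3, y3), (x4, y4) = q1, q2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if abs(d) < 1e-12:
        raise ValueError("edges are parallel")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / d
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def rebuild_ring(edges):
    """Vertices of the simplified ring: intersections of consecutive edges."""
    n = len(edges)
    return [line_intersection(*edges[i], *edges[(i + 1) % n]) for i in range(n)]
```

Because consecutive edges are extended to their intersection, the reconstructed ring can recover corners even when the selected edges do not touch.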
Luis Manuel Vilches Blázquez, B. Villazón-Terrazas, Víctor Saquicela, A. D. León, Óscar Corcho, Asunción Gómez-Pérez
In this paper we present the process followed in the development of an application that makes use of several heterogeneous Spanish public datasets related to three themes of the INSPIRE Directive, specifically Administrative Units, Hydrography, and Statistical Units. Our application aims at analysing the existing relations between the Spanish coastal area and different statistical variables such as population, unemployment, dwellings, industry, and the building trade. Besides providing methodological guidelines for the generation, publication, and exploitation of Linked Data from such datasets, we provide an important innovation with respect to similar processes followed in other initiatives by dealing with the geometrical information of features.
Luis Manuel Vilches Blázquez, B. Villazón-Terrazas, Víctor Saquicela, A. D. León, Óscar Corcho, Asunción Gómez-Pérez. "GeoLinked data and INSPIRE through an application case." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869858
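The geometry handling highlighted above can be illustrated by serializing a feature as Turtle with its geometry attached as a GeoSPARQL-style WKT literal. The URIs and the `population` predicate below are hypothetical placeholders, not the vocabulary used in the paper:

```python
# Minimal sketch of publishing a feature as Linked Data with its geometry as
# a WKT literal (GeoSPARQL style). All example.org URIs are illustrative.

def feature_to_turtle(uri, name, population, wkt):
    return "\n".join([
        f"<{uri}> a <http://example.org/ont#AdministrativeUnit> ;",
        f'    <http://www.w3.org/2000/01/rdf-schema#label> "{name}" ;',
        f"    <http://example.org/ont#population> {population} ;",
        f'    <http://www.opengis.net/ont/geosparql#asWKT> '
        f'"{wkt}"^^<http://www.opengis.net/ont/geosparql#wktLiteral> .',
    ])

print(feature_to_turtle(
    "http://example.org/id/unit/Valencia",
    "Valencia", 789744,
    "POINT(-0.3763 39.4699)"))
```

Keeping the geometry as a typed literal lets spatial variables and statistical variables be queried together over the same RDF store.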
In this paper, we study the research issues in realizing location recommendation services for large-scale location-based social networks by exploiting the social and geographical characteristics of users and locations/places. Through our analysis of a dataset collected from Foursquare, a popular location-based social networking system, we observe that strong social and geospatial ties exist among users and their favorite locations/places in the system. Accordingly, we develop a friend-based collaborative filtering (FCF) approach for location recommendation based on collaborative ratings of places made by social friends. Moreover, we propose a variant of the FCF technique, namely Geo-Measured FCF (GM-FCF), based on heuristics derived from observed geospatial characteristics in the Foursquare dataset. Finally, the evaluation results show that the proposed family of FCF techniques achieves recommendation effectiveness comparable to state-of-the-art recommendation algorithms while incurring significantly lower computational overhead. Meanwhile, GM-FCF provides additional flexibility in the tradeoff between recommendation effectiveness and computational overhead.
Mao Ye, Peifeng Yin, Wang-Chien Lee. "Location recommendation for location-based social networks." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869861
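The core FCF idea, scoring a candidate place by the similarity-weighted ratings of the user's friends, can be sketched as below. The Jaccard similarity and the toy data are illustrative assumptions, not the paper's exact formulation:

```python
# Friend-based collaborative filtering (FCF) sketch: a place unseen by the
# user is scored by summing each friend's rating weighted by user-friend
# similarity (here, Jaccard overlap of rated places).

def jaccard(a, b):
    """Similarity of two users by overlap of the places they rated."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def fcf_scores(user, friends, ratings):
    """Score every place some friend rated but the user has not visited."""
    scores = {}
    for f in friends[user]:
        sim = jaccard(ratings[user], ratings[f])
        for place, r in ratings[f].items():
            if place not in ratings[user]:
                scores[place] = scores.get(place, 0.0) + sim * r
    return scores

ratings = {
    "u1": {"cafe": 5, "park": 3},
    "u2": {"cafe": 4, "museum": 5},
    "u3": {"park": 4, "pier": 2},
}
friends = {"u1": ["u2", "u3"]}
scores = fcf_scores("u1", friends, ratings)
```

Restricting the candidate set to friends' places is what keeps the computational overhead low relative to full user-based collaborative filtering.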
With the wide use of location tracking systems, continuously tracking relationships among moving objects as their locations change is both possible and important to many real applications. This paper proposes a novel continuous location-based query, called the continuous top-k unsafe moving objects query, or CTUO. This query continuously monitors the k most unsafe moving objects, where the unsafety of an object (protectee) is defined as the difference between its safety requirement and the protection provided by protection forces (protectors) around it. Compared with traditional top-k queries, where the score of an object represents its own characteristics, CTUO describes the relationships between protectees and protectors, which introduces computational challenges since, naively, all objects must be inspected to answer such a query. To avoid this, two efficient algorithms, GridPrune and GridPrune-Pro, are proposed based on the basic pruning technique of the Threshold Algorithm. Experiments show that the proposed algorithms outperform the naive solution by nearly two orders of magnitude in I/O cost.
Jian Wen, V. Tsotras. "On continuous monitoring top-k unsafe moving objects." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869849
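The unsafety score can be illustrated with the brute-force baseline that GridPrune is designed to avoid: score every protectee against all protectors and take the k largest. The radius-based protection model here is a simplifying assumption:

```python
import heapq
import math

# Brute-force baseline for the top-k unsafe objects query: unsafety of a
# protectee = its safety requirement minus the protection contributed by
# protectors within a fixed radius (toy model). The paper's grid-based
# pruning exists precisely to avoid scoring every object like this.

def unsafety(protectee, protectors, radius=1.0):
    x, y, requirement = protectee
    protection = sum(1.0 for px, py in protectors
                     if math.hypot(px - x, py - y) <= radius)
    return requirement - protection

def top_k_unsafe(protectees, protectors, k):
    return heapq.nlargest(k, protectees,
                          key=lambda p: unsafety(p, protectors))
```

Every call re-inspects all protectors for all protectees, which is exactly the cost that grows untenable under continuous monitoring.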
M. Adelfio, Michael D. Lieberman, H. Samet, K. Firozvi
Ontuition, a system for mapping ontologies, is presented. Transforming data to a usable format for Ontuition involves recognizing and resolving data values corresponding to concepts in multiple ontological domains. In particular, for datasets with a geographic component, an attempt is made to identify and extract enough spatio-textual data to assign specific lat/long values to dataset entries. Next, a gazetteer is used to transform the textually specified locations into lat/long values that can be displayed on a map. Non-spatial ontological concepts are also discovered. This methodology is applied to the National Library of Medicine's very popular clinical trials website (http://clinicaltrials.gov/), whose users are generally interested in locating trials near where they live. The trials are specified using XML files. The location data is extracted and coupled with a disease ontology to enable general queries on the data, with the results being of use to a very large group of people. The goal is to do this automatically for ontology datasets with a locational component.
M. Adelfio, Michael D. Lieberman, H. Samet, K. Firozvi. "Ontuition: intuitive data exploration via ontology navigation." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869887
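The gazetteer step can be sketched as a lookup from textually specified locations to lat/long pairs; the place names, coordinates, and the exact-match normalization below are illustrative stand-ins for a real gazetteer:

```python
# Toy gazetteer lookup of the kind the pipeline relies on: a textually
# specified location is normalized and resolved to (lat, lon) so it can be
# placed on a map. Real gazetteers also handle ambiguity and fuzzy matches.

GAZETTEER = {
    "bethesda, maryland": (38.9847, -77.0947),
    "college park, maryland": (38.9897, -76.9378),
}

def geocode(location_text):
    """Resolve a textual location to (lat, lon), or None if unknown."""
    return GAZETTEER.get(location_text.strip().lower())
```

Entries that fail to resolve are the cases where spatio-textual extraction must supply more context (e.g., a containing state or country) before a lat/long can be assigned.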
Zhixian Yan, Lazar Spremic, D. Chakraborty, C. Parent, S. Spaccapietra, K. Aberer
With the prevalence of GPS-embedded mobile devices, enormous amounts of mobility data are being collected in the form of trajectories, each a stream of (x, y, t) points. Such trajectories come from heterogeneous entities: vehicles, people, animals, parcels, etc. Most applications primarily analyze raw trajectory data and extract geometric patterns. Real-life applications, however, need a far more comprehensive, semantic representation of trajectories. This paper demonstrates the automatic construction and visualization capabilities of SeMiTri, a system we built that exploits third-party information sources containing geographic information to semantically enrich trajectories. The construction stack encapsulates several spatio-temporal data integration and mining techniques to automatically compute and annotate all meaningful parts of heterogeneous trajectories. The visualization interface exhibits different levels of data abstraction, from low-level raw trajectories (i.e., the initial GPS trace) to high-level semantic trajectories (i.e., the sequence of interesting places where moving objects have passed and/or stayed).
Zhixian Yan, Lazar Spremic, D. Chakraborty, C. Parent, S. Spaccapietra, K. Aberer. "Automatic construction and multi-level visualization of semantic trajectories." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869879
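One standard building block for finding the "interesting places where moving objects have passed and/or stayed" is stay-point detection. This sketch shows the common distance-and-duration formulation, which is a widely used technique rather than necessarily SeMiTri's exact method:

```python
import math

# Stay-point detection sketch: report the centroid of every maximal run of
# points that stays within dist_m metres of its first point for at least
# min_secs seconds. Thresholds are illustrative defaults.

def stay_points(track, dist_m=100.0, min_secs=300):
    """track: list of (x, y, t) with x, y in metres and t in seconds."""
    stays, i = [], 0
    while i < len(track):
        j = i
        while (j + 1 < len(track) and
               math.hypot(track[j + 1][0] - track[i][0],
                          track[j + 1][1] - track[i][1]) <= dist_m):
            j += 1
        if track[j][2] - track[i][2] >= min_secs:
            xs, ys, _ = zip(*track[i:j + 1])
            stays.append((sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1
    return stays
```

Each detected stay point can then be matched against third-party geographic sources (points of interest, land use) to attach a semantic label.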
GPS-equipped taxis can be regarded as mobile sensors probing traffic flows on road surfaces, and taxi drivers are usually experienced in finding the fastest (quickest) route to a destination based on their knowledge. In this paper, we mine smart driving directions from the historical GPS trajectories of a large number of taxis, and provide a user with the practically fastest route to a given destination at a given departure time. In our approach, we propose a time-dependent landmark graph, where a node (landmark) is a road segment frequently traversed by taxis, to model the intelligence of taxi drivers and the properties of dynamic road networks. Then, a Variance-Entropy-Based Clustering approach is devised to estimate the distribution of travel time between two landmarks in different time slots. Based on this graph, we design a two-stage routing algorithm to compute the practically fastest route. We build our system on a real-world trajectory dataset generated by over 33,000 taxis over a period of 3 months, and evaluate the system by conducting both synthetic experiments and in-the-field evaluations. As a result, 60-70% of the routes suggested by our method are faster than those of the competing methods, and 20% of the routes share the same results. On average, 50% of our routes are at least 20% faster than those of the competing approaches.
Jing Yuan, Yu Zheng, Chengyang Zhang, Wenlei Xie, Xing Xie, Guangzhong Sun, Y. Huang. "T-drive: driving directions based on taxi trajectories." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869807
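Routing on a graph whose edge costs depend on departure time, as in the time-dependent landmark graph, can be sketched as Dijkstra's algorithm with each edge cost evaluated at the arrival time so far (correct under the usual FIFO assumption). The toy cost functions below stand in for the learned travel-time distributions:

```python
import heapq

# Time-dependent shortest-path sketch: graph[u] is a list of
# (v, travel_time_fn) pairs, where travel_time_fn(t) gives the edge's travel
# time when departing u at time t. Assumes FIFO (non-overtaking) edge costs.

def fastest_route(graph, source, target, depart):
    """Return (arrival_time, path) for the earliest arrival at target."""
    heap = [(depart, source, [source])]
    best = {}
    while heap:
        t, u, path = heapq.heappop(heap)
        if u == target:
            return t, path
        if u in best and best[u] <= t:
            continue  # already settled with an earlier arrival
        best[u] = t
        for v, cost_fn in graph.get(u, []):
            heapq.heappush(heap, (t + cost_fn(t), v, path + [v]))
    return None
```

Because the cost functions vary with time, the best route at one departure time can differ from the best route at another, which is the behavior the landmark graph is built to capture.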
M. Buchin, A. Driemel, M. V. Kreveld, Vera Sacristán Adinolfi
In this paper we address the problem of segmenting a trajectory such that each segment is in some sense homogeneous. We formally define different spatio-temporal criteria under which a trajectory can be homogeneous, including location, heading, speed, velocity, curvature, sinuosity, and curviness. We present a framework that allows us to segment any trajectory into a minimum number of segments under any of these criteria, or any combination of these criteria. In this framework, the segmentation problem can generally be solved in O(n log n) time, where n is the number of edges of the trajectory to be segmented.
M. Buchin, A. Driemel, M. V. Kreveld, Vera Sacristán Adinolfi. "An algorithmic framework for segmenting trajectories based on spatio-temporal criteria." ACM SIGSPATIAL International Workshop on Advances in Geographic Information Systems, 2010. doi:10.1145/1869790.1869821
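For a criterion that is monotone (every subsegment of a valid segment is also valid), greedily extending each segment as far as possible yields the minimum number of segments. This linear-scan sketch uses a speed-range criterion as an example; the paper reaches O(n log n) by combining the greedy strategy with doubling and binary search rather than the quadratic scan shown here:

```python
# Greedy trajectory segmentation sketch for a monotone criterion: extend the
# current segment while ok(...) holds, then start a new one. The speed-range
# criterion and tolerance below are illustrative.

def segment(values, ok):
    """Split values into the fewest contiguous runs with ok(run) true."""
    segments, start = [], 0
    while start < len(values):
        end = start + 1
        while end < len(values) and ok(values[start:end + 1]):
            end += 1
        segments.append(values[start:end])
        start = end
    return segments

def speed_range_ok(speeds, tol=2.0):
    """Segment is homogeneous if its speeds span at most tol units."""
    return max(speeds) - min(speeds) <= tol
```

Combined criteria fit the same interface: `ok` can simply be the conjunction of several monotone predicates, and the greedy argument still applies.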