An efficient index structure for large-scale geo-tagged video databases
Ying Lu, C. Shahabi, S. H. Kim
An unprecedented number of user-generated videos (UGVs) are being collected by mobile devices; however, such unstructured data are very hard to index and search. Thanks to recent developments, UGVs can be geo-tagged at acquisition time, e.g., with GPS locations and compass directions, at a very fine spatial granularity. Ideally, each video frame can be tagged with the spatial extent of its coverage area, termed its Field-Of-View (FOV). In this paper, we focus on the challenges of spatially indexing and querying FOVs in a large repository. Since FOVs contain both location and orientation information, and their distribution is non-uniform, conventional spatial indexes (e.g., R-tree, Grid) cannot index them efficiently. We propose a class of new R-tree-based index structures that effectively harness FOVs' camera locations, orientations and view distances, in tandem, for both filtering and optimization. In addition, we present novel search strategies and algorithms for efficient range and directional queries on FOVs utilizing our indexes. Our experiments with a real-world dataset and a large synthetic video dataset (over 30 years' worth of video) demonstrate the scalability and efficiency of our proposed indexes and search algorithms and their superiority over competitors.
{"title":"An efficient index structure for large-scale geo-tagged video databases","authors":"Ying Lu, C. Shahabi, S. H. Kim","doi":"10.1145/2666310.2666480","DOIUrl":"https://doi.org/10.1145/2666310.2666480","url":null,"abstract":"An unprecedented number of user-generated videos (UGVs) are currently being collected by mobile devices, however, such unstructured data are very hard to index and search. Due to recent development, UGVs can be geo-tagged, e.g., GPS locations and compass directions, at the acquisition time at a very fine spatial granularity. Ideally, each video frame can be tagged by the spatial extent of its coverage area, termed Field-Of-View (FOV). In this paper, we focus on the challenges of spatial indexing and querying of FOVs in a large repository. Since FOVs contain both location and orientation information, and their distribution is non-uniform, conventional spatial indexes (e.g., R-tree, Grid) cannot index them efficiently. We propose a class of new R-tree-based index structures that effectively harness FOVs' camera locations, orientations and view-distances, in tandem, for both filtering and optimization. In addition, we present novel search strategies and algorithms for efficient range and directional queries on FOVs utilizing our indexes. Our experiments with a real-world dataset and a large synthetic video dataset (over 30 years worth of videos) demonstrate the scalability and efficiency of our proposed indexes and search algorithms and their superiority over the competitors.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134148316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Monitoring moving objects using uncertain web data
M. Ba, Sébastien Montenez, T. Abdessalem, P. Senellart
A number of applications deal with monitoring moving objects: cars, aircraft, ships, persons, etc. Traditionally, this requires capturing data from sensor networks, image or video analysis, or other application-specific resources. In this demonstration paper, we show how Web content can instead be exploited to gather information (trajectories, metadata) about moving objects. As this content is marred by uncertainty and inconsistency, we develop a methodology for estimating uncertainty and filtering the resulting data. As an application, we demonstrate a system that constructs ship trajectories from social networking data, presenting to the user the inferred trajectories and meta-information, as well as uncertainty levels on the extracted information and the trustworthiness of data providers.
{"title":"Monitoring moving objects using uncertain web data","authors":"M. Ba, Sébastien Montenez, T. Abdessalem, P. Senellart","doi":"10.1145/2666310.2666370","DOIUrl":"https://doi.org/10.1145/2666310.2666370","url":null,"abstract":"A number of applications deal with monitoring moving objects: cars, aircrafts, ships, persons, etc. Traditionally, this requires capturing data from sensor networks, image or video analysis, or using other application-specific resources. We show in this demonstration paper how Web content can be exploited instead to gather information (trajectories, metadata) about moving objects. As this content is marred with uncertainty and inconsistency, we develop a methodology for estimating uncertainty and filtering the resulting data. We present as an application a demonstration of a system that constructs trajectories of ships from social networking data, presenting to a user inferred trajectories, meta-information, as well as uncertainty levels on extracted information and trustworthiness of data providers.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121959400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VisCAT: spatio-temporal visualization and aggregation of categorical attributes in Twitter data
T. Ghanem, A. Magdy, Mashaal Musleh, Sohaib Ghani, M. Mokbel
In the last few years, Twitter data has become so popular that it is used in a rich set of new applications, e.g., real-time event detection, demographic analysis, and news extraction. As user-generated data, the plethora of Twitter data motivates several analysis tasks that exploit the activity of more than 271 million Twitter users. This demonstration presents VisCAT, a tool for aggregating and visualizing categorical attributes in Twitter data. VisCAT outputs visual reports that provide spatial analysis through interactive map-based visualization for categorical attributes, such as tweet language or source operating system, at different zoom levels. The visual reports are built from user-selected data in arbitrary spatial and temporal ranges. For this data, VisCAT employs a hierarchical spatial data structure to materialize the count of each category at multiple spatial levels. We demonstrate VisCAT using a real Twitter dataset. The demonstration includes use cases on the tweet-language and tweet-source attributes in the region of the Gulf Arab states, which can be used to draw conclusions about demographics and living standards in local societies.
{"title":"VisCAT: spatio-temporal visualization and aggregation of categorical attributes in twitter data","authors":"T. Ghanem, A. Magdy, Mashaal Musleh, Sohaib Ghani, M. Mokbel","doi":"10.1145/2666310.2666363","DOIUrl":"https://doi.org/10.1145/2666310.2666363","url":null,"abstract":"In the last few years, Twitter data has become so popular that it is used in a rich set of new applications, e.g., real-time event detection, demographic analysis, and news extraction. As user-generated data, the plethora of Twitter data motivates several analysis tasks that make use of activeness of 271+ Million Twitter users. This demonstration presents VisCAT; a tool for aggregating and visualizing categorical attributes in Twitter data. VisCAT outputs visual reports that provide spatial analysis through interactive map-based visualization for categorical attributes---such as tweet language or source operating system---at different zoom levels. The visual reports are built based on user-selected data in arbitrary spatial and temporal ranges. For this data, VisCAT employs a hierarchical spatial data structure to materialize the count of each category at multiple spatial levels. We demonstrate VisCAT, using real Twitter dataset. The demonstration includes use cases on tweet language and tweet source attributes in the region of Gulf Arab states, which can be used for deducing thoughtful conclusions on demographics and living levels in local societies.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"83 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127422695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
OWGIS 2.0: open source Java application that builds web GIS interfaces for desktop and mobile devices
O. Zavala-Romero, E. Chassignet, J. Zavala‐Hidalgo, P. Velissariou, Harshul Pandav, A. Meyer-Bäse
OWGIS version 2.0 is an open-source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. This version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites is modified using COMPASS, a well-known CSS authoring framework. In addition, OWGIS uses several Open Geospatial Consortium (OGC) standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers, allowing the display of 4D data from NetCDF files. The application is configured through XML files that define which layers (geographic datasets) are displayed on the Web GIS sites. Among other features, OWGIS supports animations; vertical profiles and vertical transects; different color palettes; dynamic maps; data download; and text display in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.
{"title":"OWGIS 2.0: Open source Java application that builds web GIS interfaces for desktop and mobile devices","authors":"O. Zavala-Romero, E. Chassignet, J. Zavala‐Hidalgo, P. Velissariou, Harshul Pandav, A. Meyer-Bäse","doi":"10.1145/2666310.2666381","DOIUrl":"https://doi.org/10.1145/2666310.2666381","url":null,"abstract":"OWGIS version 2.0 is an open source Java and JavaScript application that builds easily configurable Web GIS sites for desktop and mobile devices. This version of OWGIS generates mobile interfaces based on HTML5 technology and can be used to create mobile applications. The style of the generated websites is modified using COMPASS, a well known CSS Authoring Framework. In addition, OWGIS uses several Open Geospatial Consortium standards to request data from the most common map servers, such as GeoServer. It is also able to request data from ncWMS servers allowing the display of 4D data from NetCDF files. This application is configured by XML files that define which layers, geographic datasets, are displayed on the Web GIS sites. Among other features, OWGIS allows for animations; vertical profiles and vertical transects; different color palettes; dynamic maps; the ability to download data, and display text in multiple languages. OWGIS users are mainly scientists in the oceanography, meteorology and climate fields.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129410818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering the spatial relatedness in Wikipedia
Gianluca Quercini, H. Samet
In previous work, we showed that knowledge of the spatial reader scope of a news source, that is, the geographical location for which its content is primarily produced, plays an important role in disambiguating toponyms in news articles. The determination of the spatial reader scope of a news source is based on the notion of a local lexicon, which for a location l is defined as a set of concepts, such as names of people, landmarks and historical events, that are spatially related to l. The automatic determination of a local lexicon for a wide range of locations is key to implementing an efficient geotagged news retrieval system, such as NewsStand and its variants TwitterStand and PhotoStand. The major research challenge here is measuring the spatial relatedness of a concept to a location. Our previous work resorted to a similarity measure that used the geographic coordinates attached to Wikipedia articles to find concepts that are spatially related to a given location. Clearly, this results in local lexicons that mostly include spatial concepts, although non-spatial concepts, such as people or food specialties, are key elements of the identity of a location. In this paper, we explore a set of graph-based similarity measures to determine the local lexicon of a location from Wikipedia without using any spatial clues, based on the observation that the spatial relatedness of a concept to a location is hidden in the Wikipedia link structure. Our evaluation on the local lexicons of 1,200 locations indicates that our observation is well founded. Additionally, we provide experiments on standard datasets showing that SynRank, one of the measures that we propose for computing the spatial relatedness of a concept to a location, rivals existing similarity measures in determining the semantic relatedness between Wikipedia articles.
{"title":"Uncovering the spatial relatedness in Wikipedia","authors":"Gianluca Quercini, H. Samet","doi":"10.1145/2666310.2666398","DOIUrl":"https://doi.org/10.1145/2666310.2666398","url":null,"abstract":"In a previous work we showed that the knowledge of the spatial reader scope of a news source, that is the geographical location for which its content has been primarily produced, plays an important role in disambiguating toponyms in news articles. The determination of the spatial reader scope of a news source is based on the notion of a local lexicon, which for a location l is defined as a set of concepts, such as names of people, landmarks and historical events, that are spatially related to l. The automatic determination of a local lexicon for a wide range of locations is key to implementing an efficient geotagged news retrieval system, such as NewsStand and its variants TwitterStand and PhotoStand. The major research challenge here is the measurement of the spatial relatedness of a concept to a location. Our previous work resorted to a similarity measure that used the geographic coordinates attached to the Wikipedia articles to find concepts that are spatially related to a certain location. Clearly, this results in local lexicons that mostly include spatial concepts, although non-spatial concepts, such as people or food specialties, are key elements of the identity of a location. In this paper, we explore a set of graph-based similarity measures to determine a local lexicon of a location from Wikipedia without using any spatial clues, based on the observation that the spatial relatedness of a concept to a location is hidden in the Wikipedia link structure. Our evaluation on the local lexicons of 1,200 locations indicates that our observation is well-founded. Additionally, we provide experiments on standard datasets that show that SynRank, one of the measures that we propose for computing the spatial relatedness of a concept to a location, rivals existing similarity measures in determining the semantic relatedness between wikipedia articles.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117021832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distance queries for complex spatial objects in Oracle Spatial
Ying Hu, S. Ravada, Richard Anderson, Bhuvan Bamba
With the proliferation of Global Positioning System (GPS)-enabled devices, a growing number of database systems are capable of storing and querying different spatial objects, including points, polylines and polygons. In this paper, we present our experience with supporting one important class of spatial queries in these database systems: distance queries. For example, a traveler may want to find hotels within 500 meters of a nearby beach. In addition, this paper presents new techniques implemented in Oracle Spatial for some distance-related problems, such as the maximum distance between complex spatial objects, and the diameter, convex hull and minimum bounding circle of complex spatial objects. We conduct experiments on real-world data sets and demonstrate that the performance of these distance and distance-related queries is significantly improved.
{"title":"Distance queries for complex spatial objects in oracle spatial","authors":"Ying Hu, S. Ravada, Richard Anderson, Bhuvan Bamba","doi":"10.1145/2666310.2666385","DOIUrl":"https://doi.org/10.1145/2666310.2666385","url":null,"abstract":"With the proliferation of global positioning systems (GPS) enabled devices, a growing number of database systems are capable of storing and querying different spatial objects including points, polylines and polygons. In this paper, we present our experience with supporting one important class of spatial queries in these database systems: distance queries. For example, a traveler may want to find hotels within 500 meters of a nearby beach. In addition, this paper presents new techniques implemented in Oracle Spatial for some distance-related problems, such as the maximum distance between complex spatial objects, and the diameter, the convex hull and the minimum bounding circle of complex spatial objects. We conduct our experiments by utilizing real-world data sets and demonstrate that these distance and distance-related queries can be significantly improved.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121669877","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using multi-criteria decision making for personalized point-of-interest recommendations
Yan Lyu, Chi-Yin Chow, Ran Wang, V. Lee
Location-based business review (LBBR) sites (e.g., Yelp) make it possible to recommend new points of interest (POIs) to users. The geographical position and category of POIs have been considered the two major factors in modeling users' preferences. However, users' visiting behaviors are arguably also affected by the attributes of POIs, which reflect their basic features. Moreover, a user may have different preference levels for the same POI with regard to different criteria. To this end, we propose a new personalized POI recommendation framework using Multi-Criteria Decision Making (MCDM). First, preference models are built for the user's geographical, category, and attribute preferences. Then, an MCDM-based recommendation framework is designed to iteratively combine the user's preferences on the three criteria and select the top-N POIs as a recommendation list. Experimental results show that our framework not only outperforms state-of-the-art POI recommendation techniques, but also provides a better trade-off mechanism for MCDM than the weighted-sum approach.
{"title":"Using multi-criteria decision making for personalized point-of-interest recommendations","authors":"Yan Lyu, Chi-Yin Chow, Ran Wang, V. Lee","doi":"10.1145/2666310.2666479","DOIUrl":"https://doi.org/10.1145/2666310.2666479","url":null,"abstract":"Location-based business review (LBBR) sites (e.g., Yelp) provide us a possibility to recommend new points of interest (POIs) for users. The geographical position and category of POIs have been considered as two major factors in modeling users' preferences. However, it is argued that the user's visiting behaviors are also affected by the attributes of POIs, which reflect the basic features of the POIs. Besides, a user may have different preference levels on the same POI with regard to different criteria. To this end, we propose a new personalized POI recommendation framework using Multi-Criteria Decision Making (MCDM). Firstly, preference models are built for the user's geographical, category, and attribute preferences. Then, an MCDM-based recommendation framework is designed to iteratively combine the user's preferences on the three criteria and select the top-N POIs as a recommendation list. Experimental results show that our framework not only outperforms the state-of-the-art POI recommendation techniques, but also provides a better trade-off mechanism for MCDM than the weighted sum approach.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115191716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Routing service with real world severe weather
YiRu Li, Sarah George, Craig Apfelbeck, Abdeltawab M. Hendawi, David Hazel, A. Teredesai, Mohamed H. Ali
Traditional routing services aim to save driving time by recommending the shortest path, in terms of distance or time, from a start location to a given destination. However, these methods are relatively static and, to a certain extent, rely on traffic patterns under relatively normal conditions to calculate and recommend an appropriate route. As such, they do not necessarily translate effectively to severe weather events such as tornadoes. In these scenarios, the guiding principle is not to optimize for travel time but rather for survivability of the event, i.e., to recommend an evacuation route to users inside the hazardous areas. In this demo, we present a framework for routing services that evacuate from and avoid real-world severe weather threats and that is able to: (1) identify the users inside the dangerous region of a severe weather event; (2) recommend an evacuation route to guide users to a safe destination or shelter; (3) ensure the recommended route is one of the shortest paths after excluding the risky area; and (4) maintain the flow of traffic by balancing the evacuation over the possible safe routes. During the demo, attendees will be able to use the system interactively through its graphical user interface within a number of different scenarios. They will be able to locate severe weather events in real time in any area of the USA and examine detailed information about each event; to issue an evacuation query from an existing dangerous area by identifying a destination location and receiving routing directions on their mobile devices; to issue an avoidance routing query asking for a shortest path that avoids the dangerous region; to take an inside look at the internal system components; and finally, to evaluate the overall system performance.
{"title":"Routing service with real world severe weather","authors":"YiRu Li, Sarah George, Craig Apfelbeck, Abdeltawab M. Hendawi, David Hazel, A. Teredesai, Mohamed H. Ali","doi":"10.1145/2666310.2666375","DOIUrl":"https://doi.org/10.1145/2666310.2666375","url":null,"abstract":"Traditional routing services aim to save driving time by recommending the shortest path, in terms of distance or time, to travel from a start location to a given destination. However, these methods are relatively static and to a certain extent rely on traffic patterns under relatively normal conditions to calculate and recommend an appropriate route. As such, they do not necessarily translate effectively during severe weather events such as tornadoes. In these scenarios, the guiding principal is not, optimize for travel time, but rather, optimize for survivability of the event, i.e., can we recommend an evacuation route to those users inside the hazardous areas. In this demo, we present a framework for routing services for evacuating and avoiding real world severe weather threats that is able to: (1) Identify the users inside the dangerous region of a severe weather event (2) Recommend an evacuation route to guide the users out to a safe destination or shelter (3) Assure the recommended route to be one of the shortest paths after excluding the risky area (4) Maintain the flow of traffic by normalizing the evacuation on the possible safe routes. During the demo, attendees will be able to use the system interactively through its graphical user interface within a number of different scenarios. They will be able to locate the severe weather events on real time basis in any area in USA and examine detailed information about each event, to issue an evacuation query from an existing dangerous area by identifying a destination location and receiving the routing direction on their mobile devices, to issue an avoidance routing query to ask for a shortest path that avoids the dangerous region, to have an inside look into the internal system components and finally, to evaluate the overall system performance.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124539460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accurate and efficient map matching for challenging environments
Reham Mohamed, Heba Aly, M. Youssef
We present SnapNet, a system that provides accurate real-time map matching for cellular-based trajectories. Such coarse-grained trajectories introduce new challenges to map matching, including (1) input locations that are far from the actual road segment (errors on the order of kilometers), (2) back-and-forth transitions, and (3) highly sparse input data. SnapNet addresses these challenges by applying extensive preprocessing steps to remove noisy locations and handle the data sparseness. At the core of SnapNet is a novel incremental HMM algorithm that combines digital map hints and a number of heuristics to reduce the noise and provide real-time estimation. An evaluation of SnapNet in different cities, covering more than 100 km of driving, shows that it can achieve more than 90% accuracy under noisy, coarse-grained input location estimates. This corresponds to enhancements of over 97% and 34% in precision and recall, respectively, compared to traditional HMM map matching algorithms. Moreover, SnapNet has a low latency of 1.2 ms per location estimate.
{"title":"Accurate and efficient map matching for challenging environments","authors":"Reham Mohamed, Heba Aly, M. Youssef","doi":"10.1145/2666310.2666429","DOIUrl":"https://doi.org/10.1145/2666310.2666429","url":null,"abstract":"We present the SnapNet, a system that provides accurate real-time map matching for cellular-based trajectories. Such coarse-grained trajectories introduce new challenges to map matching including (1) input locations that are far from the actual road segment (errors in the orders of kilometers), (2) back-and-forth transitions, and (3) highly sparse input data. SnapNet addresses these challenges by applying extensive preprocessing steps to remove the noisy locations and to handle the data sparseness. At the core of SnapNet is a novel incremental HMM algorithm that combines digital map hints and a number of heuristics to reduce the noise and provide real-time estimation. Evaluation of SnapNet in different cities covering more than 100km distance shows that it can achieve more than 90% accuracy under noisy coarse-grained input location estimates. This maps to over 97% and 34% enhancement in precision and recall respectively when compared to traditional HMM map matching algorithms. Moreover, SnapNet has a low latency of 1.2ms per location estimate.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114179757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Traffic incident validation and correlation using text alerts and images
W. H. Yan, J. Ong, S. Ho, Jim Cherian
One of the major challenges in extracting information from multiple spatio-temporal data sources of diverse data types is the matching and fusion of the extracted knowledge (e.g., interesting nearby events detected from text, or density and flow estimated from a set of geo-coded images). In this demonstration, we present PETRINA ("PErsonalized TRaffic INformation Analytics"), a system that provides traffic-related incident monitoring, mapping, and analytics services. In particular, we showcase two main functionalities: (1) text traffic alert validation based on traffic condition information derived from traffic camera images, and (2) traffic incident correlation based on the spatio-temporal proximity of different incident types (e.g., accidents and heavy traffic). Although the images are sparse (available every three minutes), their regularity makes it possible to validate whether a text traffic alert is outdated and to estimate the elapsed and total incident time more accurately. Multiple traffic incidents can be grouped together as a single event based on the traffic incident correlation, reducing information redundancy. Such enhanced real-time traffic information enables PETRINA to offer services such as dynamic routing with traffic incident advisories, spatio-temporal traffic incident visual analytics, and congestion analysis.
{"title":"Traffic incident validation and correlation using text alerts and images","authors":"W. H. Yan, J. Ong, S. Ho, Jim Cherian","doi":"10.1145/2666310.2666379","DOIUrl":"https://doi.org/10.1145/2666310.2666379","url":null,"abstract":"One of the major challenges during the process of extracting information from multiple spatio-temporal data sources of diverse data types is the matching and fusion of extracted knowledge (e.g. interesting nearby events detected from text, estimated density or flow from a set of geo-coded images). In this demonstration, we present PETRINA (\"PErsonalized TRaffic INformation Analytics\"), a system that provides traffic-related incident monitoring, mapping, and analytics services. In particular, we showcase two main functionalities: (1) text traffic alert validation based on traffic condition information derived from traffic camera images and (2) traffic incident correlation based on spatio-temporal proximity of different incident types (e.g., accidents and heavy traffic). Despite the fact that the images are sparse (available every three minutes), the regularity makes it possible to validate whether a text traffic alert is outdated or not, and to more accurately estimate the time elapsed and total incident time. Multiple traffic incidents can be grouped together as a single event based on the traffic incident correlation to reduce information redundancy. Such enhanced real-time traffic information enables PETRINA to offer services such as dynamic routing with traffic incident advices, spatiotemporal traffic incident visual analytics, and congestion analysis.","PeriodicalId":153031,"journal":{"name":"Proceedings of the 22nd ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121840277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}