A Framework for Efficient and Convenient Evaluation of Trajectory Compression Algorithms
Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, C. Lawson
DOI: 10.1109/COMGEO.2013.5
Trajectory compression algorithms eliminate redundant information in the history of a moving object. Such compression enables efficient transmission, storage, and processing of trajectory data. Although a number of compression algorithms have been proposed in the literature, no common benchmarking platform for evaluating their effectiveness exists. This paper presents a benchmarking framework for efficiently, conveniently, and accurately comparing trajectory compression algorithms. This framework supports various compression algorithms and metrics defined in the literature, as well as three synthetic trajectory generators that have different trade-offs. It also has a highly extensible architecture that facilitates the incorporation of new compression algorithms, evaluation metrics, and trajectory data generators. This paper provides a comprehensive overview of trajectory compression algorithms, evaluation metrics, and data generators in conjunction with detailed discussions on their unique benefits and relevant application scenarios. Furthermore, this paper describes challenges that arise in the design and implementation of the above framework and our approaches to tackling these challenges. Finally, this paper presents evaluation results that demonstrate the utility of the benchmarking framework.
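The framework benchmarks compression algorithms from the literature; as a hedged illustration of the kind of algorithm involved (not necessarily one the framework implements in this exact form), below is a minimal Python sketch of the classic Douglas-Peucker simplification applied to a 2-D track, ignoring timestamps. Time-aware variants such as TD-TR extend the same recursion with a temporal error measure.

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:                       # degenerate chord: a and b coincide
        return math.hypot(px - ax, py - ay)
    # |cross product| / base length = height of p over the chord
    return abs(dx * (py - ay) - dy * (px - ax)) / norm

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline, keeping points farther than epsilon."""
    if len(points) < 3:
        return list(points)
    # Find the interior point farthest from the chord between the endpoints
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax <= epsilon:                   # whole span is close to the chord
        return [points[0], points[-1]]
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right              # avoid duplicating the split point

track = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(track, epsilon=0.5))
```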
{"title":"A Framework for Efficient and Convenient Evaluation of Trajectory Compression Algorithms","authors":"Jonathan Muckell, Paul W. Olsen, Jeong-Hyon Hwang, S. Ravi, C. Lawson","doi":"10.1109/COMGEO.2013.5","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.5","url":null,"abstract":"Trajectory compression algorithms eliminate redundant information in the history of a moving object. Such compression enables efficient transmission, storage, and processing of trajectory data. Although a number of compression algorithms have been proposed in the literature, no common benchmarking platform for evaluating their effectiveness exists. This paper presents a benchmarking framework for efficiently, conveniently, and accurately comparing trajectory compression algorithms. This framework supports various compression algorithms and metrics defined in the literature, as well as three synthetic trajectory generators that have different trade-offs. It also has a highly extensible architecture that facilitates the incorporation of new compression algorithms, evaluation metrics, and trajectory data generators. This paper provides a comprehensive overview of trajectory compression algorithms, evaluation metrics and data generators in conjunction with detailed discussions on their unique benefits and relevant application scenarios. Furthermore, this paper describes challenges that arise in the design and implementation of the above framework and our approaches to tackling these challenges. Finally, this paper presents evaluation results that demonstrate the utility of the benchmarking framework.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126520831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Geospatial Management and Utilization of Large-Scale Urban Visual Reconstructions
Clemens Arth, Jonathan Ventura, D. Schmalstieg
DOI: 10.1109/COMGEO.2013.10

In this work we describe our approach to efficiently creating, handling, and organizing large-scale Structure-from-Motion reconstructions of urban environments. For acquiring vast amounts of data, we use a Point Grey Ladybug 3 omnidirectional camera and a custom backpack system with a differential GPS sensor. Sparse point cloud reconstructions are generated and aligned with respect to the world in an offline process. Finally, all the data is stored in a geospatial database. We incorporate additional data from multiple crowd-sourced databases, such as maps from OpenStreetMap and images from Flickr or Instagram. We discuss how our system could be used in potential application scenarios in the area of Augmented Reality.
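The reconstructions are stored in a geospatial database and retrieved by location; as a database-agnostic sketch (invented function and record names, not the authors' actual schema), a radius lookup over reconstruction anchor points, of the kind an Augmented Reality client might issue around its GPS fix, could look like this:

```python
import math

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 coordinates."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Hypothetical records: reconstruction id -> anchor coordinate of its point cloud
reconstructions = {
    "block_a": (47.0707, 15.4395),
    "block_b": (47.0722, 15.4410),
    "campus":  (47.0580, 15.4590),
}

def nearby(lat, lon, radius_m):
    """Return ids of reconstructions whose anchors lie within radius_m of (lat, lon)."""
    return [rid for rid, (rlat, rlon) in reconstructions.items()
            if haversine_m(lat, lon, rlat, rlon) <= radius_m]

print(nearby(47.0710, 15.4400, radius_m=500))   # e.g. ['block_a', 'block_b']
```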
{"title":"Geospatial Management and Utilization of Large-Scale Urban Visual Reconstructions","authors":"Clemens Arth, Jonathan Ventura, D. Schmalstieg","doi":"10.1109/COMGEO.2013.10","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.10","url":null,"abstract":"In this work we describe our approach to efficiently create, handle and organize large-scale Structure-from-Motion reconstructions of urban environments. For acquiring vast amounts of data, we use a Point Grey Ladybug 3 omni directional camera and a custom backpack system with a differential GPS sensor. Sparse point cloud reconstructions are generated and aligned with respect to the world in an offline process. Finally, all the data is stored in a geospatial database. We incorporate additional data from multiple crowd-sourced databases, such as maps from OpenStreetMap or images from Flickr or Instagram. We discuss how our system could be used in potential application scenarios from the area of Augmented Reality.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"33 3","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114100464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Coupling Simulations of Human Driven Land Use Change with Natural Vegetation Dynamics
Aashis Lamsal, Zhihua Liu, M. Wimberly
DOI: 10.1109/COMGEO.2013.32

Summary form only given: Land cover change is the result of interactions and feedbacks between processes operating at different spatial and temporal scales. As human impact on the environment becomes more pronounced, there is growing interest in understanding the effects of environmental and socio-economic changes on landscape dynamics. Computer simulation models provide a tool for studying the causes and consequences of landscape dynamics and projecting short- and long-term landscape changes. Currently, there is a need for a model that can simulate multiple drivers of land cover change, including natural disturbances and vegetation succession along with anthropogenic effects such as land use transitions and land management practices. Available land cover change models typically simulate only a subset of these disturbances, which is not sufficient for realistically simulating land cover change over large heterogeneous areas. To address this need, we developed a novel simulator that combines two existing modeling frameworks: human-driven land use change (derived from the FORE-SCE model) and natural disturbances and vegetation dynamics (derived from the LADS model), and that will incorporate novel feedbacks between human land use and natural disturbance regimes. The simulator is a raster-based, spatially explicit, stochastic computer model that combines a demand-allocation land use change model, a state-and-transition model for natural vegetation dynamics, and spatially explicit fire initiation and spread. The simulator is being designed to incorporate the effects of climate change, land management, and human resource demand on land use and natural vegetation dynamics, providing realistic, high-resolution, scenario-based land cover products. The simulator is a stand-alone program written in Visual C++ for the Microsoft Windows operating system and is under continuous development. This poster highlights the conceptual and technical design of the model integration.
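The abstract describes the vegetation component as a raster-based, stochastic state-and-transition model; the following is a minimal sketch of one annual step of such a model, with invented states and transition probabilities (not the simulator's actual parameterization):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical cell states: 0 = grassland, 1 = shrubland, 2 = forest, 3 = developed
# transition[s, t] = annual probability that a cell in state s moves to state t
transition = np.array([
    [0.90, 0.07, 0.00, 0.03],   # grassland: succession to shrubland, or development
    [0.05, 0.85, 0.08, 0.02],   # shrubland: succession to forest; fire sends it back
    [0.01, 0.04, 0.93, 0.02],   # forest is mostly stable
    [0.00, 0.00, 0.00, 1.00],   # developed land is an absorbing state
])

def step(landscape):
    """Apply one stochastic annual transition to every raster cell."""
    out = np.empty_like(landscape)
    for s in range(transition.shape[0]):
        mask = landscape == s
        out[mask] = rng.choice(transition.shape[1], size=mask.sum(), p=transition[s])
    return out

landscape = rng.choice(3, size=(100, 100))      # initialize with natural states only
for year in range(10):
    landscape = step(landscape)
print(np.bincount(landscape.ravel(), minlength=4))  # cells per state after a decade
```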
{"title":"Coupling Simulations of Human Driven Land Use Change with Natural Vegetation Dynamics","authors":"Aashis Lamsal, Zhihua Liu, M. Wimberly","doi":"10.1109/COMGEO.2013.32","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.32","url":null,"abstract":"Summary form only given: Land cover change is the result of interactions and feedbacks between processes operating at different spatial and temporal scales. As human impact on the environment becomes more pronounced, there is growing interest in understanding the effects of environmental and scocio-economic changes on landscape dynamics. Computer simulation models provide a tool for studying the causes and consequences of landscape dynamics and projecting short- and long-term landscape changes. Currently, there is a need for a model that can simulate multiple drivers of land cover change, including natural disturbances vegetation succession along with anthropogenic effects such land use transitions and land management practices. The available land cover change models typically simulate only a subset of these disturbances, which is not sufficient for realistically simulating land cover change over large heterogeneous areas. To addressing this need, we developed a novel simulator that combines two existing modeling frameworks: human-driven land use change (derived from the FORE-SCE model) with natural disturbances and vegetation dynamics (derived from the LADS model) and will incorporate novel feedbacks between human land use and natural disturbance regimes. The simulator is a raster-based, spatially explicit, stochastic computer model that combines a demand-allocation land use change model, a state-and transition for natural vegetation dynamics, and spatially explicit fire initiation and spread. The simulator is being designed to incorporate the effects of climate change, land management, and human demand on resource on land use over and natural vegetation dynamics to provide realistic, high resolution, and scenario-based land cover products. The simulator is a stand-alone program written in Visual C++ environment for use in Microsoft Windows Operating System environment, and in continuous development. This poster highlights the conceptual and technical design of the model integration.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114449924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Geostatistical Approach for Computing Absolute Vertical Accuracy of Digital Terrain Models
G. Ben-Haim, S. Dalyot, Y. Doytsher
DOI: 10.1109/COMGEO.2013.6

Digital Terrain Models (DTMs) are widely and intensively used as a computerized mapping and modeling infrastructure representing our environment. Many different types of wide-coverage DTMs exist, generated by various acquisition and production techniques, and they differ significantly in geometric attributes and accuracy. With respect to quality and accuracy, most studies investigate relative accuracy, relying solely on coordinate-based comparison approaches that ignore the local spatial discrepancies that exist in the data. Our long-term goal is to analyze the absolute accuracy of such models based on hierarchical feature-based spatial registration, which relies on the represented topography and morphology and takes existing local spatial discrepancies into account. This registration is the preliminary stage of the quality analysis, in which a relative DTM comparison is performed to determine the accuracy of the two models. This paper focuses on the second stage of the analysis, applying the same mechanism to multiple DTMs to compute absolute accuracy, exploiting the fact that the resulting solution system has a high level of redundancy. The suggested approach not only computes a posteriori absolute accuracies of DTMs, which are usually unknown, but also thoroughly analyzes the absolute accuracies of existing local trends. The methodology is carried out by developing an accuracy computation analysis that simultaneously uses multiple independent wide-coverage DTMs describing the same relief. A comparison mechanism is employed on DTM pairs using a Least Squares Adjustment (LSA) process, in which absolute accuracies are computed based on concepts from the theory of errors. A simulation of four synthetic DTMs is presented and analyzed to validate the feasibility of the proposed approach.
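The abstract leaves the adjustment details to the paper; one plausible reading of how pairwise comparisons plus redundancy yield per-DTM accuracies, akin to the classic three-cornered-hat technique, is sketched below. With independent errors, the variance of each pairwise difference surface is the sum of the two DTM error variances, so four DTMs give six observations for four unknowns, and least squares resolves the redundancy:

```python
import numpy as np
from itertools import combinations

# Hypothetical per-DTM vertical error standard deviations (m) we hope to recover
true_sigma = np.array([0.5, 1.0, 2.0, 1.5])
rng = np.random.default_rng(0)

n = len(true_sigma)
pairs = list(combinations(range(n), 2))         # 6 pairwise comparisons, 4 unknowns

# Simulated pairwise comparison: variance of each height-difference surface.
# With independent errors, Var(DTM_i - DTM_j) = sigma_i^2 + sigma_j^2.
cells = 50_000
errors = rng.normal(0.0, true_sigma, size=(cells, n))
d_var = np.array([np.var(errors[:, i] - errors[:, j]) for i, j in pairs])

# Least-squares adjustment: A @ [sigma_1^2 .. sigma_n^2] = observed variances
A = np.zeros((len(pairs), n))
for row, (i, j) in enumerate(pairs):
    A[row, i] = A[row, j] = 1.0
sigma2_hat, *_ = np.linalg.lstsq(A, d_var, rcond=None)
print(np.sqrt(sigma2_hat))   # approximately [0.5, 1.0, 2.0, 1.5]
```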
{"title":"Geostatistical Approach for Computing Absolute Vertical Accuracy of Digital Terrain Models","authors":"G. Ben-Haim, S. Dalyot, Y. Doytsher","doi":"10.1109/COMGEO.2013.6","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.6","url":null,"abstract":"Digital Terrain Models (DTMs) are widely and intensively used as a computerized mapping and modeling infrastructure representing our environment. There exist many different types of wide-coverage DTMs generated by various acquisition and production techniques, which differ significantly in terms of geometric attributes and accuracy. In aspects of quality and accuracy most studies investigate relative accuracy relying solely on coordinate-based comparison approaches that ignore the local spatial discrepancies exist in the data. Our long-term goal aims at analyzing the absolute accuracy of such models based on hierarchical feature-based spatial registration, which relies on the represented topography and morphology, taking into account local spatial discrepancies exist. This registration is the preliminary stage of the quality analysis, where a relative DTM comparison is performed to determine the accuracy of the two models. This paper focuses on the second stage of the analysis applying the same mechanism on multiple DTMs to compute the absolute accuracy based on the fact that this solution system has a high level of redundancy. The suggested approach not only qualitatively computes posteriori absolute accuracies of DTMs, usually unknown, but also thoroughly analyzes the absolute accuracies of existing local trends. The methodology is carried out by developing an accuracy computation analysis using simultaneously multiple different independent wide-coverage DTMs that describe the same relief. A comparison mechanism is employed on DTM pairs using Least Squares Adjustment (LSA) process, in which absolute accuracies are computed based on theory of errors concepts. A simulation of four synthetic DTMs is presented and analyzed to validate the feasibility of the proposed approach.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126453398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Balanced Block Design Architecture for Parallel Computing in Mobile CPUs/GPUs
G. Mani, S. Berkovich, Duoduo Liao
DOI: 10.1109/COMGEO.2013.27

To increase performance, processor manufacturers extract parallelism by shrinking transistors, adding more of them to single-core chips, and creating multi-core systems. Although microprocessor performance continues to grow at an exponential rate, this approach generates too much heat and consumes too much power. These architectures not only introduce several complications but also require tremendous effort to organize special software for parallel processing. In many cases, these difficulties are insurmountable. Programmers have to write complex code to prioritize tasks or perform them in parallel, for example by extracting parallelism through threads on GPUs. One of the key issues for programmers is how to divide tasks into sub-tasks: a faulty division may increase data dependencies, which slows the processor, while a processor that performs more parallel operations can simultaneously increase queuing delays. In both of these scenarios, the relative cost of communication (also known as data transportation energy) between processing elements in a microprocessor (or objects in parallel programming) is increasing relative to that of computation. This trend results in larger caches with every new processor generation and in more complex and costly latency-tolerance mechanisms. Here we introduce a combinatorial architecture with a unique property: multiple cores running on sequential code. This architecture can be used for both CPUs and GPUs, and only minor adjustments to a regular compiler are needed for loading. In particular, current mobile GPU technologies are still relatively immature and require substantial improvements to enable wireless devices to perform complex graphics-related functions. Our new architecture is especially suitable for mobile GPUs/CPUs, i.e., mobile heterogeneous computing, with limited resources and relatively greater performance.
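The abstract does not define its combinatorial design, but the paper's title names balanced block designs; purely as a hedged illustration of why such designs appeal for data placement, the snippet below lists the (7, 3, 1) balanced incomplete block design (the Fano plane) and verifies its defining property: every pair of data items co-resides in exactly one block, so any pairwise combination of data can be performed on exactly one processing element with no extra data transport.

```python
from itertools import combinations

# The (7, 3, 1) balanced incomplete block design (Fano plane):
# 7 data items, 7 blocks (processing elements), 3 items per block,
# built cyclically from the difference set {0, 1, 3} mod 7.
blocks = [{(0 + i) % 7, (1 + i) % 7, (3 + i) % 7} for i in range(7)]

# Defining property: every pair of items appears together in exactly one block.
for pair in combinations(range(7), 2):
    hosts = [b for b in blocks if set(pair) <= b]
    assert len(hosts) == 1, pair
print("every pair of the 7 items co-resides in exactly one of the 7 blocks")
```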
{"title":"Balanced Block Design Architecture for Parallel Computing in Mobile CPUs/GPUs","authors":"G. Mani, S. Berkovich, Duoduo Liao","doi":"10.1109/COMGEO.2013.27","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.27","url":null,"abstract":"To increase performance, processor manufacturers extract parallelism through shrinking transistors and adding more of them to single-core chips and create multi-core systems. Although microprocessors performance continues to grow at an exponential rate, this approach generates too much heat and consumes too much power. These architectures not only introduce several complications but require tremendous efforts for organization of special software for parallel processing. In many cases, these difficulties are insurmountable. The programmers have to write complex code to prioritize the tasks or perform the task in parallel like extracting parallelism through threads in GPUs. One of the key issues for the programmers is how to divide the tasks in to sub-tasks. A faulty calculation may lead to increased data dependency which will slow the processor. Processor that performs more parallel operations can simultaneously increase the queuing delays. In both of the scenarios mentioned above, the relative cost of communication (also known as data transportation energy) between processing elements in microprocessor (or objects in parallel programming) is increasing relative to that of computation. This trend is resulting in larger caches for every new processor generation and more complex and costly latency tolerant mechanisms. Here we introduce a combinatorial architecture that has a unique property-multi-core running on a sequential code. This architecture can be used for both CPUs and GPUs. Some minor adjustments to a regular compiler are needed for loading. Especially, current mobile GPUs technologies are still relatively immature and require substantial improvements to enable wireless devices to perform the complex graphics-related functions. Our new architecture is more suitable for mobile GPUs/CPUs, i.e., mobile heterogeneous computing, with limited resources and relative greater performance.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130227690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Application of Statistical Methods in City Economic and Living Standard Study: A Case of China (2003–2008)
Wenjing Cao
DOI: 10.1109/COMGEO.2013.21

In recent years, statistical methods have been used more and more in the study of economic and living standards, an important indicator of the degree of societal development and people's quality of life. This paper uses 14 economic and living indexes to analyze and evaluate the economic and living standards of China from 2003 to 2008. The results show that three components can be extracted from these 14 indexes: 'income and expense capability', 'asset investment situation', and 'whole economical strength (poor or rich)'. In addition, the study areas can be divided into three groups according to their living standards, which are related to degree of development and geographical location.
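The abstract does not name the exact techniques, but extracting components from 14 indexes and then grouping areas is commonly done with principal component analysis followed by k-means clustering; a minimal sketch on synthetic stand-in data (invented shapes and values, not the paper's dataset):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
# Stand-in data: 31 study areas x 14 economic/living indexes (synthetic values)
X = rng.normal(size=(31, 14))

# Standardize the indexes, then extract three components as in the paper
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=3)
scores = pca.fit_transform(Z)
print("variance explained:", pca.explained_variance_ratio_.round(2))

# Divide the areas into three living-standard groups on the component scores
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print("areas per group:", np.bincount(groups))
```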
{"title":"Application of Statistical Methods in City Economic and Living Standard Study: A Case of China (2003 -- 2008)","authors":"Wenjing Cao","doi":"10.1109/COMGEO.2013.21","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.21","url":null,"abstract":"In recent years, statistical methods are used more and more in the field of economic and living standard, which is a very important indicator reflecting the degree of society development and peoples' living quality. This paper uses 14 economic and living indexes to analyze and evaluate economic and living standard of China from year 2003 and 2008. The results show that three components can be extracted as 'income and expense capability', 'asset investment situation' and 'whole economical strength (poor or rich)' from these 14 indexes. Besides, all study areas in this study can be divided into three groups according to their living standard, which is related with development degree and geographical location.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114357881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Characterization of Moving Point Objects in Geospatial Data
S. Bhattacharya, B. Czejdo, R. Malhotra, Nicolas Perez, R. Agrawal
DOI: 10.1109/COMGEO.2013.33
Summary form only given. Geospatial data that exhibit time-varying patterns are being captured faster than we are able to process them, so we need machines to assist us in these tasks. One such problem is the automatic understanding of the behavior of moving objects to find higher-level information such as goals and intentions. We propose a system that solves one part of this complex task: automatic classification of the movement patterns made by objects. Our system makes some simplifying assumptions: (a) the object can be approximated as a moving point object (MPO); (b) we consider the interaction of a single MPO, such as a car or a mobile human, with static elements such as road networks and buildings (e.g., airports and bus stops) on a terrain; (c) interactions between multiple MPOs are not considered. We use supervised machine learning algorithms to train the proposed system to classify various patterns in spatiotemporal data. Algorithms such as Support Vector Machines and decision tree learning are trained with human-labeled feature vectors that mathematically summarize how an MPO interacts with a landmark over time. Our feature vector incorporates a variety of geometric and temporal measurements, such as the distances of the MPO to different points on the landmark and the rates of change over time of variables such as the distances and angles the MPO forms with respect to the landmark. Simulated data created through graphical user interaction and agent-based modeling techniques are used to generate MPO patterns over a representation of a real-world road network. The open-source agent-based modeling tool NetLogo with its GIS extension, as well as the Agent Analyst module of ArcGIS, is used to simulate large data sets. As future extensions, we are working on classification and prediction problems that involve multiple MPOs and landmarks.
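As a hedged sketch of the feature-vector idea (invented features, movement patterns, and labels, not the authors' exact set), the following summarizes a track relative to a landmark with distance and bearing statistics and trains an SVM to separate two toy patterns:

```python
import numpy as np
from sklearn.svm import SVC

def features(track, landmark):
    """Summarize how a moving point object relates to a landmark over time."""
    track = np.asarray(track, dtype=float)
    dx, dy = (track - landmark).T                  # offsets from landmark per step
    dist = np.hypot(dx, dy)                        # distance to landmark per step
    bearing = np.arctan2(dy, dx)                   # angle to landmark per step
    return np.array([dist.min(), dist.mean(), dist.max(),
                     np.abs(np.diff(dist)).mean(),      # rate of change of distance
                     np.abs(np.diff(bearing)).mean()])  # rate of change of angle

rng = np.random.default_rng(1)
landmark = np.array([0.0, 0.0])

def circling(n=50):
    """Hypothetical 'circling the landmark' pattern with noise."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    return np.c_[5 * np.cos(t), 5 * np.sin(t)] + rng.normal(0, 0.2, (n, 2))

def passing(n=50):
    """Hypothetical 'driving past the landmark' pattern with noise."""
    t = np.linspace(-10.0, 10.0, n)
    return np.c_[t, np.full(n, 3.0)] + rng.normal(0, 0.2, (n, 2))

X = np.array([features(circling(), landmark) for _ in range(40)]
             + [features(passing(), landmark) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)                  # 0 = circling, 1 = passing

clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([features(circling(), landmark),
                   features(passing(), landmark)]))  # expect [0 1]
```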
{"title":"Characterization of Moving Point Objects in Geospatial Data","authors":"S. Bhattacharya, B. Czejdo, R. Malhotra, Nicolas Perez, R. Agrawal","doi":"10.1109/COMGEO.2013.33","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.33","url":null,"abstract":"Summary form only given. Geospatial data that exhibit time varying patterns are being captured faster than we are able to process them. We thus need machines to assist us in these tasks. One such problem is the automatic understanding of the behavior of moving objects for finding higher level information such as goals, intention etc. We propose a system that can solve one part of this complex task: automatic classification of movement patterns made by objects. In addition our system makes some simplifying assumptions: a) the object can be approximated as a moving point object (MPO) b) we consider interaction of a single MPO such as a car or mobile human, with static elements such as road networks and buildings e.g. airports, bus stops etc. on a terrain c) interactions between multiple MPOs are not considered. We use supervised machine learning algorithms to train the proposed system in classifying various patterns of spatiotemporal data. Algorithms such as Support Vector Machines and Decision Tree learning are trained with human labeled feature vectors that mathematically summarize how an MPO interacts with a landmark over time. Our feature vector incorporates a variety of geometric and temporal measurements such as the variable distances of the MPO to different points on the landmark, rate of change with time of variables such as distances and angles that are formed by the MPO with respect to the landmark. Simulated data created through graphical user interaction and agent-based modeling techniques are used to simulate MPO patterns over a representation of a real-world road network. The open source agent-based modeling tool Net Logo along with its GIS extension, and also the Agent Analyst module of ArcGIS are used to simulate large data sets. As future extensions, we are working on classification and prediction problems that involve multiple MPOs and landmarks.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134393500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Mapping the Internet: Geolocating Routers by Using Machine Learning
A. Prieditis, Gang Chen
DOI: 10.1109/COMGEO.2013.17

Knowing the geolocation of a router can help to predict the geolocation of an Internet user, which is important for local advertising, fraud detection, and geo-fencing applications. For example, the geolocation of the last router on the path to a user is a reasonable guess for the user's geolocation. Current methods for geolocating a router are based on parsing a router's name to find geographic hints. Unfortunately, these methods are noisy and often provide no hints. This paper presents results on using machine learning methods to "sharpen" a router's noisy location based on the time delay between one or more routers and a target router or end user IP address. The novelty of this approach is that the geolocation of the one or more routers is not required to be known.
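As a hedged sketch of the delay-based idea (synthetic planar geometry and noise; the paper's actual features and models may differ), the following trains a regressor to map round-trip delays to a set of reference routers onto target coordinates:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)

# Hypothetical setup: 5 reference routers at fixed planar positions (km)
refs = rng.uniform(0, 1000, size=(5, 2))

def delays(points):
    """Round-trip delay (ms) to each reference: distance at ~100 km/ms plus jitter."""
    d = np.linalg.norm(points[:, None, :] - refs[None, :, :], axis=2)
    return 2 * d / 100.0 + rng.exponential(0.5, size=d.shape)  # queuing noise

# Train on routers with known positions, then locate unseen targets from delays
train_pts = rng.uniform(0, 1000, size=(2000, 2))
model = RandomForestRegressor(n_estimators=100).fit(delays(train_pts), train_pts)

test_pts = rng.uniform(0, 1000, size=(200, 2))
pred = model.predict(delays(test_pts))
print("median error (km):", np.median(np.linalg.norm(pred - test_pts, axis=1)).round(1))
```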
{"title":"Mapping the Internet: Geolocating Routers by Using Machine Learning","authors":"A. Prieditis, Gang Chen","doi":"10.1109/COMGEO.2013.17","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.17","url":null,"abstract":"Knowing the geolocation of a router can help to predict the geolocation of an Internet user, which is important for local advertising, fraud detection, and geo-fencing applications. For example, the geolocation of the last router on the path to a user is a reasonable guess for the user's geolocation. Current methods for geolocating a router are based on parsing a router's name to find geographic hints. Unfortunately, these methods are noisy and often provide no hints. This paper presents results on using machine learning methods to \"sharpen\" a router's noisy location based on the time delay between one or more routers and a target router or end user IP address. The novelty of this approach is that geolocation of the one or more routers is not required to be known.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133342874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

CARL: Crash Attribute and Reference Locator
Kyle Schutt, Joseph Newman, K. Hancock, P. Sforza
DOI: 10.1109/COMGEO.2013.20

Customizable tools that extend the functionality and enhance the existing features of a software system are key to continued innovation. Depending on their complexity, current and proposed projects tend to push the limits of existing functionality and require new tools to perform unique processes. Fortunately, software engineers and designers have taken this paradigm to heart and have created software systems with extensible architectures and frameworks. This paper presents one such customization for Esri ArcGIS that addresses the unique concerns and requirements of an ongoing project at the Center for Geospatial Information Technology involved with geolocating police-reported vehicle crashes in the Commonwealth of Virginia. The tool draws on theories and concepts from both computer science and geographic information systems to assist geocoders with evaluating, locating, and attributing crash data. Additionally, the tool provides a centralized web-based administrative portal for project managers.
{"title":"CARL: Crash Attribute and Reference Locator","authors":"Kyle Schutt, Joseph Newman, K. Hancock, P. Sforza","doi":"10.1109/COMGEO.2013.20","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.20","url":null,"abstract":"Customizable tools that extend the functionality and enhance existing features within a software system are the keys to continued innovation. Depending on complexity, current and proposed projects tend to push the limits of existing functionality and require new tools to perform unique processes. Fortunately, software engineers and designers have taken this paradigm to heart and have created software systems with extensible architectures and frameworks. This paper presents one such customization for Esri ArcGIS that addresses the unique concerns and requirements of an ongoing project at the Center for Geospatial Information Technology involved with geo locating police reported vehicle crashes in the Commonwealth of Virginia. The tool takes advantage of theories and concepts from both computer science and geographic information systems to assist geocoders with evaluating, locating, and attributing crash data. Additionally, the tool provides a centralized web-based administrative portal for project managers.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121418129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Generating Bridge Structure Model Details by Fusing GIS Source Data Using Semantic Web Technology
P. Eid, S. Mudur
DOI: 10.1109/COMGEO.2013.7

Many parameter values needed for creating high-fidelity 3D models of components above and below the terrain of a region may not be explicitly present in the GIS source data gathered for that region, but may be implicit in the combined knowledge of these multiple types of data sets. Hence, considerable effort from GIS experts is often involved in creating high-fidelity 3D models. In this paper, we propose a Data Extractors framework that fuses data from shapefile, elevation, and imagery datasets and automatically derives specific parameter values needed for creating 3D models in the region of interest. The goal is to produce a more detailed virtual area with lower turnaround time than the state of the art in geospecific region modeling. We demonstrate the application of our framework by creating detailed models of bridge structures using data typically available from GIS datasets.
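As a hedged example of one parameter such fusion could derive (synthetic arrays standing in for the real shapefile and elevation datasets, not the framework's actual extractors), the sketch below estimates a bridge's clearance by sampling the elevation raster along a centerline taken from a road layer:

```python
import numpy as np

# Synthetic stand-ins: a 100 x 100 elevation grid (m) with a valley running
# north-south, and a bridge centerline from a road shapefile crossing it.
x = np.linspace(0, 1, 100)
terrain = 50 - 30 * np.exp(-((x - 0.5) ** 2) / 0.01)   # valley dips to ~20 m
dem = np.tile(terrain, (100, 1))                       # elevation varies west-east

bridge_line = [(50, 10), (50, 90)]                     # (row, col) endpoints

def sample_along(dem, p0, p1, n=50):
    """Sample the elevation raster at n points along a straight centerline."""
    rows = np.linspace(p0[0], p1[0], n).round().astype(int)
    cols = np.linspace(p0[1], p1[1], n).round().astype(int)
    return dem[rows, cols]

profile = sample_along(dem, *bridge_line)
# Deck elevation inferred from terrain at the abutments (line endpoints);
# clearance = deck elevation minus the lowest terrain under the span.
deck = 0.5 * (profile[0] + profile[-1])
print("estimated clearance (m):", round(deck - profile.min(), 1))
```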
{"title":"Generating Bridge Structure Model Details by Fusing GIS Source Data Using Semantic Web Technology","authors":"P. Eid, S. Mudur","doi":"10.1109/COMGEO.2013.7","DOIUrl":"https://doi.org/10.1109/COMGEO.2013.7","url":null,"abstract":"Many parameter values needed for creating high fidelity 3D models of components above and below the terrain of a region may not be explicitly present in the GIS source data gathered for that region, but may be implicit in the combined knowledge in these multiple types of data sets. Hence considerable effort from GIS experts is often involved in the creation of high fidelity 3D models. In this paper, we propose a Data Extractors framework which fuses data from shape file, elevation, and imagery datasets and automatically derives specific parameter values needed for creating 3D models in the region of interest. The goal is to produce a virtual area in more detail with lower turnaround time than the state of the art in geospecific region modeling. We demonstrate the application of our framework for creating detailed models of bridge structures using data typically available from GIS datasets.","PeriodicalId":383309,"journal":{"name":"2013 Fourth International Conference on Computing for Geospatial Research and Application","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122147977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}