Pedestrian trajectory data, which can be used to mine pedestrian motion patterns or to model pedestrian dynamics, is crucial for indoor location-based service studies and applications. However, researchers face the challenges of data shortage and privacy restrictions when using pedestrian trajectory data. We present the Indoor Pedestrian Trajectory Generator (IPTG), a novel deep learning model that synthesizes pedestrian trajectory data. IPTG first produces feature sequences that encode the spatial–temporal and semantic features of the walking process and then interpolates them into complete trajectories using A* and perturbation algorithms. IPTG has specially designed loss functions that preserve topological constraints and semantic characteristics. By incorporating prior knowledge of environment constraints and pedestrian walking patterns, the IPTG model is capable of generating topologically and logically sound indoor pedestrian trajectories. We evaluated the synthesized trajectories using multiple metrics and examined the generated trajectories qualitatively. The results show that IPTG outperforms several baselines, demonstrating its ability to generate semantically meaningful and spatiotemporally coherent trajectories.
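The interpolation step relies on A* path search through the indoor environment. As a rough sketch of that ingredient only (a generic grid-based A* with unit step costs and a Manhattan heuristic, which are illustrative assumptions, not the authors' implementation):

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 0/1 occupancy grid (1 = obstacle).

    Returns the list of (row, col) cells from start to goal,
    or None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, cur, parent = heapq.heappop(open_set)
        if cur in came_from:          # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:               # walk parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), cur))
    return None
```

In IPTG's setting the searched graph would encode walls and rooms of the indoor environment, so the interpolated segments stay topologically valid; the perturbation step would then add realistic deviation around the shortest path.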
{"title":"A deep pedestrian trajectory generator for complex indoor environments","authors":"Zhenxuan He, Tong Zhang, Wangshu Wang, Jing Li","doi":"10.1111/tgis.13143","DOIUrl":"https://doi.org/10.1111/tgis.13143","url":null,"abstract":"Pedestrian trajectory data, which can be used to mine pedestrian motion patterns or to model pedestrian dynamics, is crucial for indoor location-based service studies and applications. However, researchers are faced with the challenges of data shortage and privacy restrictions when using pedestrian trajectory data. We present an <i>Indoor Pedestrian Trajectory Generator</i> (IPTG), which is a novel deep learning model to synthesize pedestrian trajectory data. IPTG first produces feature sequences that encode the spatial–temporal and semantic features of the walking process and then interpolates them into complete trajectories using A* and perturbation algorithms. IPTG has specially designed loss functions that preserve topological constraints and semantic characteristics. Incorporating the prior knowledge of environment constraints and pedestrian walking patterns, the IPTG model is capable of generating topologically and logically sound indoor pedestrian trajectories. We evaluated the synthesized trajectories based on multiple metrics and examined the generated trajectories qualitatively. 
The results show that IPTG outperforms several baselines, demonstrating its ability to generate semantically meaningful and spatiotemporally coherent trajectories.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"34 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139771713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reza Mohammadi, Mohammad Taleai, Philipp Otto, Monika Sester
Contemporary spatial statistics studies often underestimate the complexity of road networks, thereby inhibiting the strategic development of effective interventions for car accidents. In response to this limitation, the primary objective of this study is to enhance the spatiotemporal analysis of urban crash data. We introduce an innovative spatial-temporal weight matrix (STWM) for this purpose. The STWM integrates external covariates, including road network topological measurements and economic variables, offering a more comprehensive view of the spatiotemporal dependence of road accidents. To evaluate the functionality of the presented STWM, random effect eigenvector spatial filtering analysis is employed on Boston's traffic accident data from January to March 2016. The STWM improves analysis, surpassing distance-based SWM with a lower residual standard error of 0.209 and a higher adjusted R2 of 0.417. Furthermore, the study emphasizes the influence of road length on crash incidents, spatially and temporally, with random standard errors of 0.002 for spatial effects and 0.026 for non-spatial effects. This is particularly evident in the north and center of the study area during specific periods. This information can help decision-makers develop more effective urban development models and reduce future crash risks.
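For intuition about what a covariate-aware spatial-temporal weight matrix can look like, here is a minimal sketch in plain Python. The particular decay and similarity terms below (linear distance and time decay within fixed bandwidths, inverse covariate difference) are illustrative assumptions, not the paper's actual STWM specification:

```python
import math

def stwm(coords, times, covariates, d_max, t_max):
    """Row-standardized spatial-temporal weight matrix (generic sketch).

    Each weight combines distance decay, temporal proximity, and
    similarity of one external covariate (e.g. a road-network measure).
    """
    n = len(coords)
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = math.dist(coords[i], coords[j])
            dt = abs(times[i] - times[j])
            if d > d_max or dt > t_max:
                continue  # outside the spatial or temporal bandwidth
            sim = 1.0 / (1.0 + abs(covariates[i] - covariates[j]))
            w[i][j] = (1.0 - d / d_max) * (1.0 - dt / t_max) * sim
    for row in w:  # row-standardize so each nonzero row sums to 1
        s = sum(row)
        if s > 0:
            for j in range(n):
                row[j] /= s
    return w
```

The point of the construction is that two crash locations are treated as neighbors only when they are close in space, close in time, and similar in the external covariate, rather than by distance alone.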
{"title":"Analyzing urban crash incidents: An advanced endogenous approach using spatiotemporal weights matrix","authors":"Reza Mohammadi, Mohammad Taleai, Philipp Otto, Monika Sester","doi":"10.1111/tgis.13138","DOIUrl":"https://doi.org/10.1111/tgis.13138","url":null,"abstract":"Contemporary spatial statistics studies often underestimate the complexity of road networks, thereby inhibiting the strategic development of effective interventions for car accidents. In response to this limitation, the primary objective of this study is to enhance the spatiotemporal analysis of urban crash data. We introduce an innovative spatial-temporal weight matrix (STWM) for this purpose. The STWM integrates external covariates, including road network topological measurements and economic variables, offering a more comprehensive view of the spatiotemporal dependence of road accidents. To evaluate the functionality of the presented STWM, random effect eigenvector spatial filtering analysis is employed on Boston's traffic accident data from January to March 2016. The STWM improves analysis, surpassing distance-based SWM with a lower residual standard error of 0.209 and a higher adjusted <i>R</i><sup>2</sup> of 0.417. Furthermore, the study emphasizes the influence of road length on crash incidents, spatially and temporally, with random standard errors of 0.002 for spatial effects and 0.026 for non-spatial effects. This is particularly evident in the north and center of the study area during specific periods. 
This information can help decision-makers develop more effective urban development models and reduce future crash risks.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"9 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139771708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The reliability of hourly PM2.5 data obtained from air quality monitoring stations is compromised as a result of the missing values, thereby impeding the thorough examination of crucial information. In this paper, we present a spatiotemporal (ST) stacking machine learning (ML) method with daily-cycle restrictions for reconstructing missing hourly PM2.5 records. First, the ST neighbors for the target station with missing values are selected at a daily scale. Subsequently, the non-null data within the ST neighbors undergo an iterative P-BSHADE interpolation process for re-interpolation. Next, a stacking ML model is constructed using the re-interpolation values and several environmental factors associated with PM2.5 as the predictors, while the observed PM2.5 is taken as the independent variable. Finally, the missing values are reconstructed by inputting the predictors into the trained stacking model. The study utilized hourly PM2.5 data in the Beijing-Tianjin-Hebei region as a case study to assess the effectiveness of the proposed method, using daily missing ratios of 10%, 30%, and 50%, respectively. The accuracy of the proposed method was then compared to four contemporary ST interpolation methods. The results indicate that the proposed method exhibits superior performance compared to the classical methods. Specifically, it achieves a reduction in the average root mean square error and mean absolute error by at least 40.6% and 40.1%, respectively. Additionally, the proposed method demonstrates the successful recovery of extreme values in the hourly PM2.5 records, in contrast to the classical methods which often exhibit a tendency to overestimate low values and underestimate high values. Overall, the proposed method presents a viable and efficient approach to recover missing values in the hourly PM2.5 records that demonstrate evident daily periodic patterns.
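The stacking idea, base-model predictions feeding a meta-learner, can be sketched in a few lines of plain Python. The univariate linear base models and the averaged meta-feature below are deliberate simplifications for illustration, not the paper's actual ensemble:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def stack_predict(x1, x2, y, q1, q2):
    """Two-level stacking sketch: base models are linear fits on two
    predictors (think: a re-interpolated PM2.5 value and one
    environmental covariate); the meta-learner is a linear fit on the
    base predictions. Returns the stacked prediction at (q1, q2)."""
    a1, b1 = fit_line(x1, y)
    a2, b2 = fit_line(x2, y)
    p1 = [a1 * v + b1 for v in x1]  # base predictions on training data
    p2 = [a2 * v + b2 for v in x2]
    avg = [(u + v) / 2 for u, v in zip(p1, p2)]
    am, bm = fit_line(avg, y)       # meta-learner on base predictions
    qa = ((a1 * q1 + b1) + (a2 * q2 + b2)) / 2
    return am * qa + bm
```

In the paper's pipeline the base features additionally come from the daily-cycle-restricted P-BSHADE re-interpolation, and the stacked model is trained only on hours where PM2.5 was actually observed.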
{"title":"Spatiotemporal stacking method with daily-cycle restrictions for reconstructing missing hourly PM2.5 records","authors":"Chuanfa Chen, Kunyu Li","doi":"10.1111/tgis.13141","DOIUrl":"https://doi.org/10.1111/tgis.13141","url":null,"abstract":"The reliability of hourly PM2.5 data obtained from air quality monitoring stations is compromised as a result of the missing values, thereby impeding the thorough examination of crucial information. In this paper, we present a spatiotemporal (ST) stacking machine learning (ML) method with daily-cycle restrictions for reconstructing missing hourly PM2.5 records. First, the ST neighbors for the target station with missing values are selected at a daily scale. Subsequently, the non-null data within the ST neighbors undergo an iterative P-BSHADE interpolation process for re-interpolation. Next, a stacking ML model is constructed using the re-interpolation values and several environmental factors associated with PM2.5 as the predictors, while the observed PM2.5 is taken as the independent variable. Finally, the missing values are reconstructed by inputting the predictors into the trained stacking model. The study utilized hourly PM2.5 data in the Beijing-Tianjin-Hebei region as a case study to assess the effectiveness of the proposed method, using daily missing ratios of 10%, 30%, and 50%, respectively. The accuracy of the proposed method was then compared to four contemporary ST interpolation methods. The results indicate that the proposed method exhibits superior performance compared to the classical methods. Specifically, it achieves a reduction in the average root mean square error and mean absolute error by at least 40.6% and 40.1%, respectively. Additionally, the proposed method demonstrates the successful recovery of extreme values in the hourly PM2.5 records, in contrast to the classical methods which often exhibit a tendency to overestimate low values and underestimate high values. 
Overall, the proposed method presents a viable and efficient approach to recover missing values in the hourly PM2.5 records that demonstrate evident daily periodic patterns.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"37 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139771676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ramón Molinero-Parejo, Francisco Aguilera-Benavente, Montserrat Gómez-Delgado
Descriptive scenarios about the possible evolution of land use in our cities are essential instruments in urban planning. Although the simulation of these scenarios has enormous potential, further characterization is needed in order to be able to evaluate and compare them so as to provide more effective support for public policy. One of the most commonly used tools for assessing these scenarios is spatial moving-window metrics, a useful mechanism for extracting accurate information from simulated land-use maps on urban diversity and urban growth patterns. This article seeks to explore this question further and has two main aims. First, to develop and implement vSHEI and vLEI, two multiscale composition and configuration vector moving-window metrics for calculating urban diversity and urban growth patterns. Second, to test these metrics using the spatially explicit simulation of three prospective scenarios in the Henares Corridor (Spain), comparing the results and analyzing how well the scenario narratives match their spatial configuration, as measured using vSHEI and vLEI. Via the implementation of vSHEI and vLEI, we obtained urban diversity and urban expansion values at a local level, offering more precise and more realistic, mappable information on the composition and configuration of urban land use than that provided by raster metrics or by vector Patch-Matrix model metrics. We also used these metrics to test whether the simulated scenarios matched their description in the narrative storylines. Our results demonstrate the potential of vector moving-window metrics for characterizing the urban patterns that might develop under different scenarios at the parcel level.
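The core of a diversity moving-window metric is a Shannon evenness computation per window. As a raster analogue for intuition only (the paper's vSHEI operates on vector parcels, and this square-window version is an assumption made for simplicity):

```python
import math

def shannon_evenness(counts):
    """Shannon evenness index (SHEI): H / ln(k) for k observed classes.

    Ranges over [0, 1]; 0 when one class dominates the window,
    1 when all observed classes are equally represented.
    """
    total = sum(counts.values())
    k = len(counts)
    if k < 2 or total == 0:
        return 0.0
    h = -sum((c / total) * math.log(c / total) for c in counts.values() if c > 0)
    return h / math.log(k)

def moving_window_shei(grid, radius):
    """SHEI of land-use classes within a square window around each cell."""
    rows, cols = len(grid), len(grid[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            counts = {}
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        counts[grid[rr][cc]] = counts.get(grid[rr][cc], 0) + 1
            out[r][c] = shannon_evenness(counts)
    return out
```

Replacing the square raster window with a neighborhood of vector parcels, weighted by parcel area, is essentially the adaptation the article develops.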
{"title":"Adapting moving-window metrics to vector datasets for the characterization and comparison of simulated urban scenarios","authors":"Ramón Molinero-Parejo, Francisco Aguilera-Benavente, Montserrat Gómez-Delgado","doi":"10.1111/tgis.13139","DOIUrl":"https://doi.org/10.1111/tgis.13139","url":null,"abstract":"Descriptive scenarios about the possible evolution of land use in our cities are essential instruments in urban planning. Although the simulation of these scenarios has enormous potential, further characterization is needed in order to be able to evaluate and compare them so as to provide more effective support for public policy. One of the most commonly used tools for assessing these scenarios is spatial moving-window metrics, a useful mechanism for extracting accurate information from simulated land-use maps on urban diversity and urban growth patterns. This article seeks to explore this question further and has two main aims. First, to develop and implement vSHEI and vLEI, two multiscale composition and configuration vector moving-window metrics for calculating urban diversity and urban growth patterns. Second, to test these metrics using the spatially explicit simulation of three prospective scenarios in the Henares Corridor (Spain), comparing the results and analyzing how well the scenario narratives match their spatial configuration, as measured using vSHEI and vLEI. Via the implementation of vSHEI and vLEI, we obtained urban diversity and urban expansion values at a local level, offering more precise and more realistic, mappable information on the composition and configuration of urban land use than that provided by raster metrics or by vector Patch-Matrix model metrics. We also used these metrics to test whether the simulated scenarios matched their description in the narrative storylines. 
Our results demonstrate the potential of vector moving-window metrics for characterizing the urban patterns that might develop under different scenarios at the parcel level.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"9 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139771711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hotspot detection from geo-referenced urban data is critical for smart city research, such as traffic management and policy making. However, the classical clustering or classification approach for hotspot detection mainly aims at identifying “hotspot areas” rather than specific points, and the setting of global parameters such as search bandwidth can lead to inaccurate results when processing multi-density urban data. In this article, a data-driven adaptive hotspot detection (AHD) approach based on kernel density analysis is proposed and applied to various spatial objects. The adaptive search bandwidth is automatically calculated depending on the local density. Window detection is used to extract the specific hotspots in AHD, thus realizing a small-scale characterization of urban hotspots. Through the trajectory data of Harbin City taxis and New York City crime data, Geo-information Tupu is used to analyze the obtained specific hotspots and verify the effectiveness of AHD, providing new ideas for further research.
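One common way to make kernel density estimation locally adaptive is to tie the bandwidth at each location to its k-th nearest-neighbor distance, so dense areas get a narrow kernel and sparse areas a wide one. This specific rule is an assumption chosen for illustration, in the spirit of (but not necessarily identical to) AHD's data-driven bandwidth:

```python
import math

def adaptive_kde(points, query, k=5):
    """Adaptive-bandwidth Gaussian kernel density estimate at `query`.

    The bandwidth is the distance from `query` to its k-th nearest
    data point, making the estimate density-aware without any global
    search-bandwidth parameter.
    """
    dists = sorted(math.dist(query, p) for p in points)
    h = max(dists[min(k, len(dists)) - 1], 1e-9)  # local bandwidth
    # 2-D Gaussian kernel sum, normalized by n * 2*pi*h^2
    return sum(math.exp(-(d / h) ** 2 / 2) for d in dists) / (
        len(points) * 2 * math.pi * h * h)
```

Evaluating such an estimate on a grid and then extracting local maxima with window detection yields specific hotspot points rather than broad hotspot areas.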
{"title":"A data-driven adaptive geospatial hotspot detection approach in smart cities","authors":"Yuchen Yan, Wei Quan, Hua Wang","doi":"10.1111/tgis.13137","DOIUrl":"https://doi.org/10.1111/tgis.13137","url":null,"abstract":"Hotspot detection from geo-referenced urban data is critical for smart city research, such as traffic management and policy making. However, the classical clustering or classification approach for hotspot detection mainly aims at identifying “hotspot areas” rather than specific points, and the setting of global parameters such as search bandwidth can lead to inaccurate results when processing multi-density urban data. In this article, a data-driven adaptive hotspot detection (AHD) approach based on kernel density analysis is proposed and applied to various spatial objects. The adaptive search bandwidth is automatically calculated depending on the local density. Window detection is used to extract the specific hotspots in AHD, thus realizing a small-scale characterization of urban hotspots. Through the trajectory data of Harbin City taxis and New York City crime data, Geo-information Tupu is used to analyze the obtained specific hotspots and verify the effectiveness of AHD, providing new ideas for further research.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"2017 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139661710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Lei Zhou, Weiye Xiao, Han Li, Chen Wang, Xueqin Wang, Zhenlong Zheng
The job-housing relationship is a well-documented topic in urban and economic geography literature, but the disparities in job-housing relationships across workers' sociodemographic statuses have yet to be fully explored. This study utilizes a Baidu trajectory dataset and spatial analysis tools to examine job-housing relationships in Zhuhai, China, taking into account disparities in workers' socioeconomic status and job types. Origin–destination analysis indicates that job-housing relationships for commercial and public service sectors are balanced in the urban core, whereas, for the secondary sector, the relationship is more balanced in the suburban area compared to the central urban area. Network analysis further reveals the presence of self-contained communities for the secondary sector in peripheral areas. We find that high-income workers in the secondary sector experience longer commuting distances, in contrast to their counterparts in the commercial and public service sectors. These insights underscore the significance of considering workers' skills in urban and economic planning.
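A toy version of the origin-destination step (mean commute distance per worker group) can be written in a few lines; the record layout and the straight-line planar distance below are simplifying assumptions, since the study works with map-matched Baidu trajectories:

```python
import math
from collections import defaultdict

def mean_commute_by_group(records):
    """Average home-to-work distance per worker group.

    `records` are (group, home_xy, work_xy) tuples in planar units,
    e.g. group = income band or job sector.
    """
    totals = defaultdict(lambda: [0.0, 0])
    for group, home, work in records:
        t = totals[group]
        t[0] += math.dist(home, work)  # straight-line commute distance
        t[1] += 1
    return {g: s / n for g, (s, n) in totals.items()}
```

Comparing these group means across zones is what exposes the disparity the study reports, for example longer commutes for high-income secondary-sector workers.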
{"title":"Identify social and job disparities in the relationship between job-housing balance and urban commuting using Baidu trajectory big data","authors":"Lei Zhou, Weiye Xiao, Han Li, Chen Wang, Xueqin Wang, Zhenlong Zheng","doi":"10.1111/tgis.13135","DOIUrl":"https://doi.org/10.1111/tgis.13135","url":null,"abstract":"The job-housing relationship is a well-documented topic in urban and economic geography literature, but the disparities in job-housing relationships across workers' sociodemographic statuses have yet to be fully explored. This study utilizes a Baidu trajectory dataset and spatial analysis tools to examine job-housing relationships in Zhuhai, China, taking into account disparities in workers' socioeconomic status and job types. Origin–destination analysis indicates that job-housing relationships for commercial and public service sectors are balanced in the urban core, whereas, for the secondary sector, the relationship is more balanced in the suburban area compared to the central urban area. Network analysis further reveals the presence of self-contained communities for the secondary sector in peripheral areas. We find that high-income workers in the secondary sector experience longer commuting distances, in contrast to their counterparts in the commercial and public service sectors. 
These insights underscore the significance of considering workers' skills in urban and economic planning.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"110 1 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139501373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning (DL) algorithms have become increasingly popular in recent years for remote sensing applications, particularly in the field of change detection. DL has proven to be successful in automatically identifying changes in satellite images with varying resolutions. The integration of DL with remote sensing has not only facilitated the identification of global and regional changes but has also been a valuable resource for the scientific community. Researchers have developed numerous approaches for change detection, and the proposed work provides a summary of the most recent ones. Additionally, it introduces the common DL techniques used for detecting changes in satellite photos. The meta-analysis conducted in this article serves two purposes. Firstly, it tracks the evolution of change detection in DL investigations, highlighting the advancements made in this field. Secondly, it utilizes powerful DL-based change detection algorithms to determine the best strategy for monitoring changes at different resolutions. Furthermore, the proposed work thoroughly analyzes the performance of several DL approaches used for change detection. It discusses the strengths and limitations of these approaches, providing insights into their effectiveness and areas for improvement. The article also discusses future directions for DL-based change detection, emphasizing the need for further research and development in this area.
{"title":"Developments in deep learning for change detection in remote sensing: A review","authors":"Gaganpreet Kaur, Yasir Afaq","doi":"10.1111/tgis.13133","DOIUrl":"https://doi.org/10.1111/tgis.13133","url":null,"abstract":"Deep learning (DL) algorithms have become increasingly popular in recent years for remote sensing applications, particularly in the field of change detection. DL has proven to be successful in automatically identifying changes in satellite images with varying resolutions. The integration of DL with remote sensing has not only facilitated the identification of global and regional changes but has also been a valuable resource for the scientific community. Researchers have developed numerous approaches for change detection, and the proposed work provides a summary of the most recent ones. Additionally, it introduces the common DL techniques used for detecting changes in satellite photos. The meta-analysis conducted in this article serves two purposes. Firstly, it tracks the evolution of change detection in DL investigations, highlighting the advancements made in this field. Secondly, it utilizes powerful DL-based change detection algorithms to determine the best strategy for monitoring changes at different resolutions. Furthermore, the proposed work thoroughly analyzes the performance of several DL approaches used for change detection. It discusses the strengths and limitations of these approaches, providing insights into their effectiveness and areas for improvement. 
The article also discusses future directions for DL-based change detection, emphasizing the need for further research and development in this area.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"11 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139495375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Length measurements calculated from the geometry of vector geographic objects, called geometric measurements, are inherently imprecise. The imprecision of the measurements is due to the accumulation of causes of various origins, related to the production processes, and the rules of data representation. In order to reduce the overall imprecision of geometric length measurements, this article proposes to identify the causes of measurement error in the data, to model their respective impact, and finally to combine these different impacts. To do so, five causes of geometric measurement error have been modeled: map projection, terrain disregard, polygonal approximation of curves, digitizing error, and cartographic generalization. To estimate the overall measurement imprecision, three combination methods are proposed: selection of the maximum error, sum of the errors, and quadratic aggregation of the errors. An experiment conducted on a sample of roads represented at a medium scale demonstrates that quadratic error aggregation is the most effective combination method for reducing the imprecision of geometric length measurements.
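The three combination strategies compared in the article are simple to state in code; quadratic aggregation is the root-sum-of-squares rule, which is the appropriate choice when the error causes are assumed independent:

```python
import math

def combine_errors(errors, method="quadratic"):
    """Combine per-cause length-measurement errors into one overall
    imprecision estimate, using the three strategies compared in the
    article: maximum error, plain sum, or quadratic aggregation."""
    if method == "max":
        return max(errors)                              # worst single cause
    if method == "sum":
        return sum(errors)                              # pessimistic upper bound
    if method == "quadratic":
        return math.sqrt(sum(e * e for e in errors))    # root-sum-of-squares
    raise ValueError(f"unknown method: {method}")
```

For example, impacts of 3 m and 4 m combine to 4 m (max), 7 m (sum), or 5 m (quadratic); the experiment on medium-scale roads found the quadratic estimate closest to the actual imprecision.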
{"title":"Combining error models to reduce the imprecision of geometric length measurement in vector databases","authors":"Jean-François Girres","doi":"10.1111/tgis.13132","DOIUrl":"https://doi.org/10.1111/tgis.13132","url":null,"abstract":"Length measurements calculated from the geometry of vector geographic objects, called geometric measurements, are inherently imprecise. The imprecision of the measurements is due to the accumulation of causes of various origins, related to the production processes, and the rules of data representation. In order to reduce the overall imprecision of geometric length measurements, this article proposes to identify the causes of measurement error in the data, to model their respective impact, and finally to combine these different impacts. To do so, five causes of geometric measurement error have been modeled: map projection, terrain disregard, polygonal approximation of curves, digitizing error, and cartographic generalization. To estimate the overall measurement imprecision, three combination methods are proposed: selection of the maximum error, sum of the errors, and quadratic aggregation of the errors. An experiment conducted on a sample of roads represented at a medium scale demonstrates that quadratic error aggregation is the most effective combination method for reducing the imprecision of geometric length measurements.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"5 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139475150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Christin Henzen, Arne Rümmler, Heiko Figgemeier, Michael Wagner, Lars Bernard, Ralph Müller-Pfefferkorn
Research data infrastructures are quickly evolving and show a wide variety, for instance in the way they address user requirements and use cases as well as how they provide user-required information through their software architecture. In this article, we discuss challenges and provide approaches to developing software architectures and software components for research data infrastructures in environmental sciences. Taking the GeoKur research project on harmonizing land use time series as the use case for curation and quality assurance of environmental research data, we designed, implemented, and tested approaches and software components with particular regard to data management planning as well as provenance and quality information management. We aim to illustrate how to better meet researchers’ needs and provide tightly interlinked software components.
{"title":"Research data infrastructures in environmental sciences—Challenges and implementation approaches focusing on the integration of software components","authors":"Christin Henzen, Arne Rümmler, Heiko Figgemeier, Michael Wagner, Lars Bernard, Ralph Müller-Pfefferkorn","doi":"10.1111/tgis.13131","DOIUrl":"https://doi.org/10.1111/tgis.13131","url":null,"abstract":"Research data infrastructures are quickly evolving and show a wide variety, for instance in the way they address user requirements and use cases as well as how they provide user-required information through their software architecture. In this article, we discuss challenges and provide approaches to developing software architectures and software components for research data infrastructures in environmental sciences. Taking the GeoKur research project on harmonizing land use time series as the use case for curation and quality assurance of environmental research data, we designed, implemented, and tested approaches and software components with particular regard to data management planning as well as provenance and quality information management. We aim to illustrate how to better meet researchers’ needs and provide tightly interlinked software components.","PeriodicalId":47842,"journal":{"name":"Transactions in GIS","volume":"40 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139458544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kai Xu, Songshan Yue, Qingbin Chen, Jin Wang, Fengyuan Zhang, Yijie Wang, Peilong Ma, Yongning Wen, Min Chen, Guonian Lü
Geoscientific models have rapidly developed in recent decades as effective tools to understand the past, perceive the present, and predict the future. However, with the increasing number of models available, discovering suitable ones and applying them properly in problem-solving situations has become more challenging. Existing materials describing geoscientific models (e.g., articles, manuals, handbooks, metadata documents, and web pages) are scattered and varied in structure and content. To help model users from different disciplinary backgrounds find, access, implement, and reuse models more conveniently, we propose an open knowledge framework for geoscientific models. The knowledge framework includes three levels: a resource level for indicating where to find a model, a connection level for indicating what materials are related to a model, and an application level for indicating how the model is applied. Through such a three-level framework, model users can collaboratively provide descriptive information for a model, link different materials to a model (e.g., data, references, and tools), and contribute experiences regarding model application in practical cases (as reusable solutions). Thus, a web-based community can be formed to facilitate the better use of geoscientific models. This article introduces the Open Geographic Modeling and Simulation System (OpenGMS) as the implementation of this open knowledge framework. Case studies are given to showcase the effectiveness and capability of the proposed framework.
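The three-level framework described above (resource, connection, and application levels) can be viewed as a layered data model. The following is a minimal sketch of such a model in Python; all class and field names here are hypothetical illustrations of the abstract's description, not the actual OpenGMS API or schema.

```python
from dataclasses import dataclass, field


@dataclass
class ResourceEntry:
    """Resource level: where to find a model."""
    model_name: str
    access_url: str


@dataclass
class ConnectionEntry:
    """Connection level: materials related to a model (e.g., data, references, tools)."""
    model_name: str
    datasets: list = field(default_factory=list)
    references: list = field(default_factory=list)
    tools: list = field(default_factory=list)


@dataclass
class ApplicationEntry:
    """Application level: how a model was applied in a practical case (a reusable solution)."""
    model_name: str
    case_description: str
    solution_steps: list = field(default_factory=list)


@dataclass
class ModelKnowledge:
    """A three-level knowledge record for one geoscientific model."""
    resource: ResourceEntry
    connections: ConnectionEntry
    applications: list = field(default_factory=list)


# A hypothetical record contributed collaboratively by model users:
record = ModelKnowledge(
    resource=ResourceEntry("HydroModel", "https://example.org/models/hydromodel"),
    connections=ConnectionEntry(
        "HydroModel",
        datasets=["DEM tiles"],
        references=["user manual"],
    ),
    applications=[
        ApplicationEntry(
            "HydroModel",
            "runoff simulation for a small catchment",
            ["prepare DEM", "calibrate parameters", "run simulation"],
        )
    ],
)
```

The separation into three dataclasses mirrors the framework's intent: the resource level answers "where," the connection level answers "what is related," and the application level captures "how," so each can be populated independently by different contributors.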
Xu, K., Yue, S., Chen, Q., Wang, J., Zhang, F., Wang, Y., Ma, P., Wen, Y., Chen, M., & Lü, G. (2024). Construction of an open knowledge framework for geoscientific models. Transactions in GIS. https://doi.org/10.1111/tgis.13134 (published 2024-01-10)