Abstract In view of the limitations of the total station and the laser tracker in the construction of three-dimensional control networks, a construction method combining both instruments is proposed. Total stations and laser trackers are used separately to observe the control points according to the free-stationing method. In data processing, the weights of the different types of observations from the total station and the laser tracker are determined by Helmert variance component estimation. The high-precision horizontal direction observations of the total station (better than 0.5″) and the ultra-high-precision ranging observations of the laser tracker (better than 10 μm) are thus combined effectively to obtain more precise adjustment results. The experimental results show that the positional precision of the combined adjustment is better than that of either single measurement system, and that the precision of the three-dimensional control network can be further improved by employing Helmert variance component estimation to determine the weights.
Yinggang Guo, Zongchun Li, Hao Yang. "Construction of precise three-dimensional engineering control network with total station and laser tracker." Journal of Applied Geodesy 16(1): 321–329. Published 2022-04-28. doi: 10.1515/jag-2021-0021
Abstract Ensuring that carrier-phase ambiguities are correctly fixed to integers is a critical prerequisite for the reliability of high-precision GNSS carrier positioning. It is therefore both theoretically and practically important to investigate the performance of the ambiguity validation test and the selection of an appropriate threshold. First, two statistics are proposed in this paper to quantitatively describe the performance of the validation test: the true negative rate, based on the share of Type I errors (discarded-truth) among all failed tests, and the false positive rate, based on the share of Type II errors (false positives) among all passed tests. The paper then employs the false positive rate and the true negative rate as the primary and secondary criteria, respectively, for evaluating the performance of the R-ratio test, and develops simulation experiments to evaluate different thresholds under different ambiguity dimensions and data accuracies. It concludes with decisions for threshold selection: (1) for ambiguities of 4 to 9 dimensions, a reference table for the selection of thresholds is given; (2) for ambiguities of 10 dimensions or more, the threshold should be no less than 2.0 (and data whose variance-matrix main-diagonal elements average more than 3.7 should not be fixed).
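As a concrete reading of these definitions, a minimal sketch of the R-ratio acceptance test and of the two rates follows. The candidate squared norms q1 ≤ q2 and the bookkeeping are illustrative only, not the paper's simulation setup:

```python
def rratio_accept(q1, q2, threshold=2.0):
    """Pass the test iff the second-best integer candidate (squared norm q2)
    is at least `threshold` times worse than the best one (q1)."""
    return q2 / q1 >= threshold

def error_rates(outcomes, threshold=2.0):
    """outcomes: iterable of (q1, q2, best_is_correct).
    Returns (false_positive_rate, true_negative_rate) in the abstract's
    sense: wrong fixes among passed tests, discarded-truth among failed."""
    passed = [ok for q1, q2, ok in outcomes if rratio_accept(q1, q2, threshold)]
    failed = [ok for q1, q2, ok in outcomes if not rratio_accept(q1, q2, threshold)]
    fpr = sum(not ok for ok in passed) / len(passed) if passed else 0.0
    tnr = sum(ok for ok in failed) / len(failed) if failed else 0.0
    return fpr, tnr
```

Raising the threshold trades false positives for discarded-truth cases, which is exactly the trade-off the reference table resolves per ambiguity dimension.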
Yanze Wu, Xianwen Yu, Jiafu Wang. "R-ratio test threshold selection in GNSS integer ambiguity resolution." Journal of Applied Geodesy 16(1): 313–320. Published 2022-04-28. doi: 10.1515/jag-2022-0007
Abstract The Iraqi GNSS network was installed in 2005 with help from the USA and UK. It consists of seven GNSS stations distributed across Iraq. The network's GNSS data have been comprehensively analyzed in this study, which in turn allowed us to assess the impact of various geophysical phenomena (e.g., tectonic plate motion and earthquakes) on its positional accuracy, stability, and validity over time. We processed daily GPS data spanning more than five years. The Earth Parameter and Orbit System software (EPOS.P8), developed by the German Research Centre for Geosciences (GFZ), was used for data processing, adopting the Precise Point Positioning (PPP) strategy. The stacked time series of station coordinates were analyzed after estimating all modeled deterministic and stochastic parameters using the least-squares technique. The study confirmed a slight impact of the recent M 7.3 earthquake on the Iraqi GNSS stations and concluded that the stations were stable over the study period (2013 to 2018) and that their motion represents the movement of the Arabian plate.
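The plate-motion signal behind such a stability statement is usually a linear trend fitted to each coordinate time series. A minimal sketch under that assumption (a plain least-squares line fit, ignoring the periodic terms and the stochastic model the study estimates):

```python
import numpy as np

def station_velocity(t_years, coord_m):
    """Fit coord = a + v * t by least squares; return the velocity v (m/yr)."""
    A = np.column_stack([np.ones_like(t_years), t_years])
    (a, v), *_ = np.linalg.lstsq(A, coord_m, rcond=None)
    return v
```

A station riding on the Arabian plate would show a consistent trend of a few centimetres per year in such a fit, while a locally unstable monument would depart from the plate-wide pattern.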
Sattar Isawi, H. Schuh, Benjamin Männel, P. Sakic. "Stability analysis of the Iraqi GNSS stations." Journal of Applied Geodesy 16(1): 299–312. Published 2022-03-31. doi: 10.1515/jag-2022-0001
Abstract This paper presents a method to overlay laser scan data from at least two different epochs using B-splines. An approach is described in which the laser scan points are taken directly as control points of the B-spline curves or surfaces. Together with the second part of the publication, the presented technique is intended to detect deformations that are small compared with the object size, based on the curvature parameter; it does not aim at detecting object movements, rotations or translations. Before the epoch-wise laser scan acquisition of an object, the sampling rate has to be adjusted to the object properties. A defining algorithm derives control polygons with different numbers of points from the point clouds, whose points are available in scanning order but in an irregularly distributed form rather than a uniform grid. As the basis for the bidirectional grid structure of tensor-product B-spline surfaces, a knot insertion algorithm (Boehm's algorithm) is applied. While the preceding process steps are implemented for both B-spline curves and surfaces, the following steps are so far available only for B-spline curves. A correlation analysis on the ranks of the curvatures of the B-spline curve points yields one shift position of the curve in both epochs. This forms the foundation for determining identical curve points, which are iteratively improved and provide the basis for a Helmert transformation to epoch 0. The rotation and translation parameters calculated with the Helmert transformation are used only to map the B-spline curve of an epoch n to epoch 0. An object rotation cannot be determined, because the laser scans are not georeferenced. The applied transformation allows the representation of coordinate differences between the laser scan data of different epochs. An implementation of the process steps is available for B-spline curves so far and has yet to be completed for B-spline surfaces; this will be done in the second part of the publication. The second part will also detail the propagation of the standard deviations of the laser scan data to those of the curve and surface points for detecting deformations by means of a significance test.
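For readers unfamiliar with the machinery, evaluating such a B-spline curve from its control polygon is a few lines of de Boor's algorithm. The sketch below uses a clamped cubic curve with arbitrary 2D "scan points" as control points; it illustrates only the curve model, not the authors' processing chain:

```python
import numpy as np

def find_span(t, knots, p, n_ctrl):
    """Index k with knots[k] <= t < knots[k+1] (clamped at the right end)."""
    if t >= knots[n_ctrl]:
        return n_ctrl - 1
    k = p
    while knots[k + 1] <= t:
        k += 1
    return k

def de_boor(t, knots, ctrl, p):
    """Evaluate a degree-p B-spline curve at parameter t (de Boor's algorithm)."""
    k = find_span(t, knots, p, len(ctrl))
    d = [np.array(ctrl[j + k - p], dtype=float) for j in range(p + 1)]
    for r in range(1, p + 1):
        for j in range(p, r - 1, -1):
            i = j + k - p
            alpha = (t - knots[i]) / (knots[i + p - r + 1] - knots[i])
            d[j] = (1 - alpha) * d[j - 1] + alpha * d[j]
    return d[p]
```

With the scan points as control points, curvature can then be evaluated analytically along the curve, which is what feeds the rank-correlation step described above.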
Julia Aichinger, V. Schwieger. "Studies on deformation analysis of TLS point clouds using B-splines – A control point based approach (Part I)." Journal of Applied Geodesy 16(1): 279–298. Published 2022-03-30. doi: 10.1515/jag-2021-0065
Tarek Hassan, T. Fath-Allah, M. Elhabiby, Alaa ElDin Awad, M. El‐Tokhey
Abstract Pedestrian and vehicular navigation relies mainly on Global Navigation Satellite Systems (GNSS). Even when different navigation systems are integrated, GNSS positioning remains the core of any navigation process, as it is the only system capable of providing independent solutions. In harsh environments, however, especially urban ones, GNSS signals encounter many obstructions, causing satellite signals to reach the receivers via reflected paths. These Non-Line-of-Sight (NLOS) signals can degrade positioning accuracy significantly. This contribution proposes a new algorithm to detect and exclude NLOS signals using 3D building models constructed from Volunteered Geographic Information (VGI): OpenStreetMap (OSM) and Google Earth (GE) data are combined to build the 3D models incorporated with the GNSS signals in the algorithm. Real field data are used for testing and validation of the presented algorithm and strategy. The accuracy improvement after exclusion of the NLOS signals is evaluated using phase-smoothed code observations. The results show that the proposed algorithm improves horizontal positioning accuracy remarkably: the improvement reaches 10.72 m, and the Root Mean Square Error (RMSE) drops by 1.64 m (a 46 % improvement) over the epochs with detected NLOS satellites. The improvement is also analyzed in the Along-Track (AT) and Cross-Track (CT) directions: it reaches 6.89 m in the AT direction with a 1.076 m drop in RMSE, and 8.64 m in the CT direction with a 1.239 m drop.
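The core geometric test in such an algorithm can be sketched very simply: a satellite is flagged as NLOS when an obstruction between receiver and satellite subtends a higher elevation angle than the satellite itself. The box-model representation below is a hypothetical stand-in for the OSM/GE-derived 3D building models:

```python
import math

def is_nlos(sat_az_deg, sat_el_deg, buildings):
    """buildings: list of (az_min, az_max, height_m, dist_m) obstructions
    around the receiver. Flag the satellite if its line of sight is blocked."""
    for az_min, az_max, height_m, dist_m in buildings:
        if az_min <= sat_az_deg <= az_max:
            mask_el = math.degrees(math.atan2(height_m, dist_m))
            if sat_el_deg < mask_el:
                return True  # direct path blocked: signal can only arrive reflected
    return False
```

Excluding the flagged satellites before computing the position fix is what yields the RMSE reductions reported above.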
Tarek Hassan, T. Fath-Allah, M. Elhabiby, Alaa ElDin Awad, M. El‐Tokhey. "Integration of GNSS observations with volunteered geographic information for improved navigation performance." Journal of Applied Geodesy 16(1): 265–277. Published 2022-03-23. doi: 10.1515/jag-2021-0063
Abstract This paper analyses the regularization of an ill-conditioned mathematical model in single-epoch precise GNSS positioning. The regularization parameter (RP) is selected to minimize the Mean Squared Error (MSE) criterion. Crucial for RP estimation is ensuring stable initial least-squares (LS) estimates, so that the unknown quadratic matrix of actual values can be replaced by the LS covariance matrix. For this purpose, two different data models are proposed and two research scenarios formed. Two regularized LS estimations are tested against the non-regularized LS approach: the first is the classic regularized LS estimation, the second its iterative counterpart. For the iteratively regularized LS estimator, the regularized bias is significantly lower while the overall accuracy improves in the MSE sense. The regularized variance-covariance matrix of better precision can mitigate the impact of regularized bias on integer least-squares (ILS) estimation to some extent. Iterative LS regularization is therefore well suited to single-epoch integer ambiguity resolution (AR). Nevertheless, the performance of the ILS estimator is studied in the context of the probability of correct integer AR in the presence of regularized bias.
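In its simplest form, the scheme described above amounts to Tikhonov (ridge) regularization with the RP chosen to minimize an estimated MSE in which the LS (or, iteratively, the regularized) solution stands in for the unknown truth. A minimal sketch under those assumptions, not the paper's full GNSS model:

```python
import numpy as np

def ridge(A, y, lam):
    """Regularized LS estimate x = (A^T A + lam*I)^{-1} A^T y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)

def est_mse(A, sigma2, lam, x_hat):
    """Estimated MSE = trace of the estimator covariance plus squared
    regularization bias, with x_hat substituted for the unknown parameters."""
    N = A.T @ A
    M = np.linalg.inv(N + lam * np.eye(N.shape[0]))
    variance = sigma2 * np.trace(M @ N @ M)          # variance part
    bias_sq = lam**2 * float(x_hat @ M @ M @ x_hat)  # squared-bias part
    return variance + bias_sq
```

For an ill-conditioned normal matrix N, even a small positive RP cuts the variance term far more than it adds bias, which is why the regularized estimator wins in the MSE sense.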
Artur Fischer, S. Cellmer, K. Nowel. "Regularizing ill-posed problem of single-epoch precise GNSS positioning using an iterative procedure." Journal of Applied Geodesy 16(1): 247–264. Published 2022-03-19. doi: 10.1515/jag-2021-0031
Abstract The solution of the geodetic problem is an important technical problem in geodesy. Based on a comprehensive analysis of the characteristics of Bessel's formula and Vincenty's formula, an improved algorithm is proposed. Firstly, the arctangent formula is replaced by an arccosine formula for calculating the spherical angular distance. Secondly, Bessel's formula is used in place of Vincenty's formula in the singular region. Finally, test formulas for the geodesic and the azimuth are given. Following these formulas, four test points are selected, located at the equator and at low, middle and high latitudes. The geodesic lengths tested range from 1000 to 18000 km and the azimuths from 0° to 360°. The experimental results show that the improved algorithm handles the singularity and quadrant-judgment problems well. The solution precision of the geodesic is better than 1 mm, and the precision of the azimuth is better than 0.0001 arcseconds.
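The arctangent-to-arccosine substitution can be illustrated on the sphere; clamping the cosine into [−1, 1] keeps the evaluation stable near the singular (coincident and antipodal) configurations. This is only the spherical kernel, not the full Bessel/Vincenty iteration on the ellipsoid:

```python
import math

def spherical_distance(lat1, lon1, lat2, lon2):
    """Angular distance (radians) between two points via the arccosine form."""
    f1, f2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    c = math.sin(f1) * math.sin(f2) + math.cos(f1) * math.cos(f2) * math.cos(dl)
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding errors
```

Unlike a naive arctangent form, the clamped arccosine never produces a domain error or a wrong quadrant for near-antipodal pairs.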
Jianqiang Wang, Yunlong Sun, Yang Dai. "Inverse geodetic problem for long distance based on improved Vincenty's formula." Journal of Applied Geodesy 16(1): 241–246. Published 2022-02-09. doi: 10.1515/jag-2021-0057
Abstract This manuscript explores the divergence of the Vertical Total Electron Content (VTEC) estimated from Global Navigation Satellite System (GNSS) measurements using global and regional models and the International Reference Ionosphere (IRI) model over low- to high-latitude regions during various levels of magnetic activity. The VTEC is estimated using a territorial network of 7 GNSS stations in Egypt and 10 stations of the International GNSS Service (IGS) global network. The impact of magnetic activity on VTEC is investigated. Because of the scarcity of IGS receivers in North Africa and the shortage of GNSS measurements there, an additional interpolation step is performed to cover the data gap over the region. A MATLAB code was written to produce VTEC maps for Egypt from the territorial network and contrast them with the global VTEC maps delivered by the Center for Orbit Determination in Europe (CODE); this yields genuine VTEC maps estimated from actual GNSS measurements over any region of North Africa. A Spherical Harmonics Expansion (SHE) model, called the Local VTEC Model (LVTECM), was implemented in MATLAB to estimate VTEC values from observations of dual-frequency GNSS receivers. The VTEC calculated from GNSS measurements using LVTECM is compared with CODE VTEC results and with the IRI-2016 model. The analysis demonstrates good convergence between the CODE VTEC and that estimated by LVTECM: the correlation between LVTECM and CODE reaches about 96 % and 92 % during high and low magnetic activity, respectively, with maximum differences of 2.5 TECu and 1.3 TECu. The maximum discrepancies between LVTECM and IRI-2016 are 9.7 TECu and 2.3 TECu at high and low magnetic activity, respectively. The variation in VTEC due to moderate magnetic activity ranges from 1 to 5 TECu. The VTEC estimated from the regional network shows a 95 % correlation with CODE, with a maximum difference of 5.9 TECu.
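The basic dual-frequency observable behind such VTEC maps can be sketched as follows. The geometry-free code combination gives slant TEC, which a thin-shell model maps to vertical TEC; the 450 km shell height is a common convention assumed here, not a value taken from the paper:

```python
import math

F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies (Hz)

def stec_from_code(P1, P2):
    """Slant TEC (electrons/m^2) from the dual-frequency code difference (m)."""
    return F1**2 * F2**2 / (40.3 * (F1**2 - F2**2)) * (P2 - P1)

def vtec_single_layer(stec, elev_deg, R=6371e3, h=450e3):
    """Map slant TEC to vertical TEC with a thin-shell model at height h."""
    z = math.radians(90.0 - elev_deg)
    zp = math.asin(R / (R + h) * math.sin(z))  # zenith angle at the shell
    return stec * math.cos(zp)
```

As a sanity check, 1 TECu (1e16 electrons/m²) corresponds to roughly 0.105 m of L1–L2 code difference, and the mapping leaves a zenith observation unchanged.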
A. Sedeek. "Validation of regional and global ionosphere maps from GNSS measurements versus IRI2016 during different magnetic activity." Journal of Applied Geodesy 16(1): 229–240. Published 2022-02-09. doi: 10.1515/jag-2021-0046
Abstract The method of least-squares collocation, well known to physical geodesists, is compared with the geostatistical method of kriging, probably better known to the broader audience. Both methods are rooted in Wiener–Kolmogorov (W–K) prediction theory; but, since necessity is the mother of invention, the W–K foundations have been extended to satisfy the needs of particular applications. The paper presents a link, or rather an equivalence, between the two methods as far as their basic forms are concerned (specialization to geodetic boundary-value problems, covariance propagation between functionals and nonlinear geostatistical methods are excluded from this comparison). Only scalar random fields (the univariate case) and the assumption of a second-order structure of the random function are considered. Due to the equivalence of their basic formulas, both techniques share the same advantages and disadvantages. The paper also shows how the predicted values and prediction variances differ between exact and filtered (noise-reducing) prediction models. This theoretical comparison of the methods has practical implications, because readily available geostatistical software can be used, in local as well as global applications, for predictive problems occurring in geodesy and surveying.
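The shared basic formula of the two methods is the W–K predictor ŝ = C_new,obs (C_obs,obs + C_noise)⁻¹ l. A minimal sketch with a hypothetical Gaussian covariance model; noise = 0 gives the exact-interpolation case, noise > 0 the filtered one:

```python
import numpy as np

def wk_predict(cov, x_obs, y_obs, x_new, noise=0.0):
    """Simple kriging / least-squares collocation prediction (identical
    in their basic forms): s_hat = C_new,obs @ (C_obs,obs + noise*I)^-1 @ y."""
    C = cov(x_obs[:, None], x_obs[None, :]) + noise * np.eye(len(x_obs))
    c = cov(x_new[:, None], x_obs[None, :])  # cross-covariances
    return c @ np.linalg.solve(C, y_obs)
```

With zero noise the predictor reproduces the observations exactly at the data points; adding a noise variance smooths the prediction away from them, which is precisely the exact-versus-filtered distinction discussed in the paper.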
M. Ligas, “Comparison of kriging and least-squares collocation – Revisited,” Journal of Applied Geodesy 16(1): 217–227, published 2022-02-09. DOI: 10.1515/jag-2021-0032.
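The equivalence the abstract describes can be illustrated with the shared Wiener–Kolmogorov prediction formula: simple kriging and least-squares collocation both predict a signal value as s_hat = C_sy (C_yy)^{-1} y, with the prediction variance reduced from the prior variance by the same quadratic form. A minimal sketch follows; the Gaussian covariance model, the data, and the noise variance are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cov(h, sill=1.0, length=2.0):
    """Isotropic Gaussian covariance function of separation distance h (assumed model)."""
    return sill * np.exp(-(h / length) ** 2)

# Synthetic 1-D observations (zero-mean signal assumed)
x_obs = np.array([0.0, 1.0, 2.5, 4.0])      # observation locations
y_obs = np.array([0.3, 0.8, 0.2, -0.4])     # observed values
x_new = 1.7                                  # prediction location
noise_var = 0.01                             # observation-noise variance (filtered model)

# Covariance of the observations (signal part plus noise on the diagonal)
C_yy = cov(np.abs(x_obs[:, None] - x_obs[None, :])) + noise_var * np.eye(len(x_obs))
# Cross-covariance between the signal at x_new and the observations
c_sy = cov(np.abs(x_new - x_obs))

w = np.linalg.solve(C_yy, c_sy)              # kriging weights = LSC gain
s_hat = w @ y_obs                            # predicted value (identical in both methods)
var_hat = cov(0.0) - w @ c_sy                # prediction (error) variance
```

Dropping `noise_var` from the diagonal of `C_yy` gives the exact (interpolating) model; keeping it gives the filtered model the abstract contrasts, with smoother predictions and larger prediction variances at the data points.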
Abstract Although weighted total least-squares (WTLS) adjustment within the errors-in-variables (EIV) model is a rigorous method for parameter estimation, its exact solution is computationally demanding, since the matrix operations in the repeated iteration process are extremely time-consuming, especially for large data sets. This paper rewrites the EIV model as a similar Gauss–Markov model by taking the random errors of the design matrix and of the observations into account, and reformulates it as an iterative weighted least-squares (IWLS) method without complicated theoretical derivation. IWLS approximates the “exact solution” of general WTLS and provides a good balance between computational efficiency and estimation accuracy. Because the weighted LS (WLS) method has a natural advantage in solving the EIV model, we also investigate whether WLS can directly replace IWLS and WTLS in implementing the EIV model when the parameters of the EIV model are small. The results of numerical experiments confirm that IWLS obtains almost the same solution as the general WTLS solution of Jazaeri [21], and that WLS achieves the same accuracy as general WTLS when the parameters are small.
Zhijun Kang, “A simple iterative algorithm based on weighted least-squares for errors-in-variables models: Examples of coordinate transformations,” Journal of Applied Geodesy 16(1): 203–215, published 2022-02-09. DOI: 10.1515/jag-2021-0053.
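The idea of handling an EIV problem by repeated ordinary WLS passes can be sketched on the simplest case, a straight-line fit with random errors in both coordinates. This is not the paper's algorithm, only an illustration of the iterative-WLS principle: fold the design-matrix error into an effective observation variance (here var_y + a² var_x for slope a), solve a standard WLS problem, and recompute the weights with the updated slope. All data and variances below are assumed.

```python
import numpy as np

def iwls_line(x, y, var_x, var_y, iters=20):
    """Fit y = a*x + b with errors in both x and y via iterated WLS (illustrative sketch)."""
    A = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]   # ordinary LS as the starting value
    for _ in range(iters):
        w = 1.0 / (var_y + a**2 * var_x)          # effective weights for the current slope
        W = np.diag(w)
        N = A.T @ W @ A                            # normal equations of a plain WLS step
        a, b = np.linalg.solve(N, A.T @ W @ y)
    return a, b

# Synthetic data: true line y = 2x + 1, noise in both coordinates,
# heteroscedastic y-noise so the weights actually matter
rng = np.random.default_rng(1)
x_true = np.linspace(0.0, 10.0, 30)
sig_y = 0.05 + 0.02 * x_true                      # per-point y standard deviations
x = x_true + rng.normal(0.0, 0.05, x_true.size)
y = 2.0 * x_true + 1.0 + rng.normal(0.0, sig_y)

a, b = iwls_line(x, y, var_x=0.05**2, var_y=sig_y**2)
```

Each pass is an ordinary WLS solve, so the per-iteration cost matches the Gauss–Markov model; the EIV structure enters only through the slope-dependent weights, which is the balance between efficiency and accuracy the abstract refers to.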