Pub Date: 2024-06-20 | DOI: 10.1007/s00190-024-01866-x
Variance component adaptive estimation algorithm for coseismic slip distribution inversion using interferometric synthetic aperture radar data
Yingwen Zhao, Caijun Xu, Yangmao Wen
When conducting coseismic slip distribution inversion with interferometric synthetic aperture radar (InSAR) data, there is no universal method for objectively determining the appropriate size of the InSAR data set. Little is also known about the computational efficiency of the variance component estimation implemented in the inversion. We therefore develop a variance component adaptive estimation algorithm that determines the optimal sampling number of InSAR data for the slip distribution inversion. We derive variation formulae for the variance component estimation that are more concise than the conventional simplified formulae. Based on multiple sampling data sets with different sampling numbers, the proposed algorithm determines the optimal sampling number from the changing behavior of the variance component estimates themselves. In three simulation cases, four evaluation indicators remain at low levels for the obtained optimal sampling number, validating the feasibility and effectiveness of the proposed algorithm. Compared with the conventional slip distribution inversion strategy using the standard downsampling algorithm, the simulation cases and practical applications to five earthquakes suggest that the developed algorithm is more flexible and robust in yielding an appropriate size of InSAR data and thus provides a reasonable estimate of the slip distribution. Computation time analyses indicate that the computational advantage of the variation formulae depends on the ratio of the number of data to the number of fault patches and is effective for ratios smaller than five, facilitating rapid coseismic slip distribution inversion.
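For readers unfamiliar with variance component estimation, the sketch below shows a generic iterative simplified (Förstner/Helmert-type) VCE that re-weights observation groups sharing one parameter vector. It is only a textbook illustration of the kind of estimator whose efficiency is at stake here, not the variation formulae derived in the paper; the function name `simplified_vce` and the toy data are invented for this example.

```python
import numpy as np

def simplified_vce(A_list, y_list, max_iter=50, tol=1e-6):
    """Iterative simplified variance-component estimation for m observation
    groups y_k = A_k x + e_k with Cov(e_k) = sigma_k^2 * I.
    Generic textbook scheme, NOT the paper's variation formulae."""
    m = len(A_list)
    s2 = np.ones(m)                           # initial variance components
    for _ in range(max_iter):
        N = sum(A.T @ A / s for A, s in zip(A_list, s2))
        u = sum(A.T @ y / s for A, y, s in zip(A_list, y_list, s2))
        x = np.linalg.solve(N, u)             # combined least-squares solution
        Ninv = np.linalg.inv(N)
        s2_new = np.empty(m)
        for k, (A, y) in enumerate(zip(A_list, y_list)):
            v = y - A @ x                     # group residuals
            r_k = A.shape[0] - np.trace(Ninv @ (A.T @ A)) / s2[k]  # redundancy share
            s2_new[k] = (v @ v) / r_k         # updated variance component
        converged = np.all(np.abs(s2_new - s2) / s2 < tol)
        s2 = s2_new
        if converged:
            break
    return x, s2

# toy usage: two groups observing the same 3 parameters with different noise
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0, 0.5])
A1, A2 = rng.normal(size=(200, 3)), rng.normal(size=(80, 3))
y1 = A1 @ x_true + rng.normal(scale=0.01, size=200)
y2 = A2 @ x_true + rng.normal(scale=0.05, size=80)
x_hat, s2_hat = simplified_vce([A1, A2], [y1, y2])
print(x_hat, np.sqrt(s2_hat))                 # component std devs near 0.01 and 0.05
```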
{"title":"Variance component adaptive estimation algorithm for coseismic slip distribution inversion using interferometric synthetic aperture radar data","authors":"Yingwen Zhao, Caijun Xu, Yangmao Wen","doi":"10.1007/s00190-024-01866-x","DOIUrl":"https://doi.org/10.1007/s00190-024-01866-x","url":null,"abstract":"<p>When conducting coseismic slip distribution inversion with interferometric synthetic aperture radar (InSAR) data, there is no universal method to objectively determine the appropriate size of InSAR data. Currently, little is also known about the computing efficiency of variance component estimation implemented in the inversion. Therefore, we develop a variance component adaptive estimation algorithm to determine the optimal sampling number of InSAR data for the slip distribution inversion. We derived more concise variation formulae than conventional simplified formulae for the variance component estimation. Based on multiple sampling data sets with different sampling numbers, the proposed algorithm determines the optimal sampling number by the changing behaviors of variance component estimates themselves. In three simulation cases, four evaluation indicators at low levels corresponding to the obtained optimal sampling number validate the feasibility and effectiveness of the proposed algorithm. Compared with the conventional slip distribution inversion strategy with the standard downsampling algorithm, the simulation cases and practical applications of five earthquakes suggest that the developed algorithm is more flexible and robust to yield appropriate size of InSAR data, thus provide a reasonable estimate of slip distribution. Computation time analyses indicate that the computational advantage of variation formulae is dependent of the ratio of the number of data to the number of fault patches and can be effectively suitable for cases with the ratio smaller than five, facilitating the rapid estimation of coseismic slip distribution inversion.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"46 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141436055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00190-024-01867-w
Review of early ground deformation observations by electronic distance measurements (EDM) on active Sicilian volcanoes: valuable data and information for long-term analyses
Alessandro Bonforte, Salvatore Gambino, Rosanna Velardita, Laura Privitera
Electronic distance measurements (EDM) represent one of the first methods used to detect ground deformation on volcanoes. Employed since 1964, EDM provides precise distance measurements whose repetition over time can highlight changes related to volcanic activity. The technique was widely used on volcanoes from the 1970s to the early 2000s and has served many times to model the position, geometry, and volume of magmatic and hydrothermal sources. This paper reports the EDM experience, results, and data acquired on the Sicilian volcanoes (Etna, Vulcano, Stromboli and Pantelleria) since the early 1970s, which played a major role in the birth of volcano geodesy and in the understanding of volcanic processes, making the Sicilian volcanoes among those with the longest geodetic records in the world.
{"title":"Review of early ground deformation observations by electronic distance measurements (EDM) on active Sicilian volcanoes: valuable data and information for long-term analyses","authors":"Alessandro Bonforte, Salvatore Gambino, Rosanna Velardita, Laura Privitera","doi":"10.1007/s00190-024-01867-w","DOIUrl":"https://doi.org/10.1007/s00190-024-01867-w","url":null,"abstract":"<p>Electronic distance measurements (EDM) represent one of the first methods to detect ground deformation on volcanoes. Used since 1964, they enable acquiring precise distance measurements, whose time repetition may highlight changes related to volcanic activity. This technique was widely used on volcanoes from the 1970s to the early 2000s and has been used many times to model position, geometry, and volumes of magmatic and hydrothermal sources. This paper reports the EDM experiences, results and data acquired on Sicilian volcanoes (Etna, Vulcano, Stromboli and Pantelleria) from the early 1970s, which have played a major role in the birth of the volcano-geodesy for volcanic process knowledge, making the Sicilian volcanoes among those with the longest geodetic record in the world.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"26 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141430468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-20 | DOI: 10.1007/s00190-024-01864-z
The B-spline mapping function (BMF): representing anisotropic troposphere delays by a single self-consistent functional model
Shengping He, Thomas Hobiger, Doris Becker
The troposphere's asymmetry can introduce errors ranging from centimeters to decimeters at low elevation angles, which cannot be ignored in high-precision positioning and meteorological research. The traditional two-axis gradient model, which strongly relies on an open-sky environment around the receiver, exhibits misfits at low elevation angles because of its simplistic nature. In response, we propose a directional mapping function based on cyclic B-splines, named the B-spline mapping function (BMF). This model replaces the conventional approach of estimating Zenith Wet Delay and gradient parameters by estimating only four parameters, which enable a continuous characterization of the troposphere delay in any direction. A simulation test based on a numerical weather model was conducted to validate the superiority of cyclic B-spline functions in representing tropospheric asymmetry. Based on an extensive analysis, the performance of the BMF was assessed within precise point positioning using data from 45 International GNSS Service stations across Europe and Africa. The BMF improves the coordinate repeatability by approximately 10% horizontally and about 5% vertically. The improvements are particularly pronounced under heavy rainfall conditions, where the improvement in the 3-dimensional root mean square error reaches up to 13%.
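The abstract does not give the exact BMF parameterization, so the sketch below only illustrates the underlying idea: a handful of cyclic (periodic) B-spline coefficients describing the azimuthal variation of the wet delay, scaled here by a crude 1/sin(e) elevation factor as a stand-in for a proper mapping function. The functions `periodic_bspline` and `directional_wet_delay` are hypothetical names introduced for this illustration.

```python
import numpy as np
from scipy.interpolate import BSpline

def periodic_bspline(coeffs, degree=3, period=2*np.pi):
    """Periodic (cyclic) B-spline on [0, period) built from a small coefficient set
    by wrapping the first `degree` coefficients (standard periodic construction)."""
    coeffs = np.asarray(coeffs, float)
    K = len(coeffs)
    knots = np.arange(-degree, K + degree + 1) * (period / K)   # uniform, extended
    c_ext = np.concatenate([coeffs, coeffs[:degree]])
    return BSpline(knots, c_ext, degree, extrapolate=False)

def directional_wet_delay(elev_rad, azim_rad, coeffs):
    """Slant wet delay as an azimuth-dependent zenith term from the cyclic spline,
    divided by sin(elevation) as a crude elevation scaling (assumption)."""
    spline = periodic_bspline(coeffs)
    return spline(np.mod(azim_rad, 2*np.pi)) / np.sin(elev_rad)

# four coefficients describe the full azimuthal variation of the wet delay
delays = directional_wet_delay(np.radians(15.0),
                               np.radians([0, 90, 180, 270]),
                               coeffs=[0.18, 0.21, 0.19, 0.17])
print(delays)   # metres of slant wet delay in four directions (toy numbers)
```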
{"title":"The B-spline mapping function (BMF): representing anisotropic troposphere delays by a single self-consistent functional model","authors":"Shengping He, Thomas Hobiger, Doris Becker","doi":"10.1007/s00190-024-01864-z","DOIUrl":"https://doi.org/10.1007/s00190-024-01864-z","url":null,"abstract":"<p>Troposphere’s asymmetry can introduce errors ranging from centimeters to decimeters at low elevation angles, which cannot be ignored in high-precision positioning technology and meteorological research. The traditional two-axis gradient model, which strongly relies on an open-sky environment of the receiver, exhibits misfits at low elevation angles due to their simplistic nature. In response, we propose a directional mapping function based on cyclic B-splines named B-spline mapping function (BMF). This model replaces the conventional approach, which is based on estimating Zenith Wet Delay and gradient parameters, by estimating only four parameters which enable a continuous characterization of the troposphere delay across any directions. A simulation test, based on a numerical weather model, was conducted to validate the superiority of cyclic B-spline functions in representing tropospheric asymmetry. Based on an extensive analysis, the performance of BMF was assessed within precise point positioning using data from 45 International GNSS Service stations across Europe and Africa. It is revealed that BMF improves the coordinate repeatability by approximately <span>(10%)</span> horizontally and about <span>(5%)</span> vertically. Such improvements are particularly pronounced under heavy rainfall conditions, where the improvement of 3-dimensional root mean square error reaches up to <span>(13%)</span>.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"44 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141430396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-19 | DOI: 10.1007/s00190-024-01872-z
Continental and oceanic AAM contributions to Chandler Wobble with the amplitude attenuation from 2012 to 2022
Xue-Qing Xu, Ming Fang, Yong-Hong Zhou, Xin-Hao Liao
We reconstructed the Chandler Wobble (CW) from 1962 to 2022 by combining the Eigen-oscillator excited by the geophysical fluids of atmospheric and oceanic angular momentum (AAM and OAM). The mass and motion terms of the AAM are further separated into land and ocean domains. Particular attention is paid to the time span from 2012 to 2022 in relation to the observed reduction in the amplitude of the CW. Our research indicates that the main contributor to the AAM-induced CW is the mass term (i.e., the pressure variations over land). Moreover, the phase of the AAM-induced CW remains relatively stable over the interval 1962–2022. In contrast, the phase of the OAM-induced CW exhibits a periodic variation with a cycle of approximately 20 years. This cyclic variation modulates the overall amplitude of the CW. The noticeable amplitude reduction over the past ten years can be attributed to the evolution of the CW components driven by AAM and OAM toward a state of cancellation. These findings further reveal that the variation in the phase difference between the CW forced by AAM and that forced by OAM may be indicative of changes in the interaction between the solid Earth, atmosphere, and ocean.
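As background, the excitation-to-polar-motion step behind such reconstructions is commonly a numerical integration of the Liouville equation with a complex Chandler frequency. The sketch below implements that standard textbook integration with assumed typical values for the Chandler period (about 433 days) and quality factor; it is not the paper's reconstruction procedure, and the function name and toy excitation series are invented.

```python
import numpy as np

def integrate_chandler(chi, dt_days, T_c=433.0, Q=100.0, p0=0+0j):
    """Integrate the Liouville equation  (i/sigma_c) dp/dt + p = chi  for polar
    motion p = x - i*y driven by a complex excitation series chi (e.g. equatorial
    AAM + OAM).  T_c and Q are assumed typical values, not the paper's settings."""
    sigma_c = 2.0 * np.pi / T_c * (1.0 + 1j / (2.0 * Q))   # complex Chandler frequency
    e = np.exp(1j * sigma_c * dt_days)
    p = np.empty(len(chi), dtype=complex)
    p[0] = p0
    for n in range(len(chi) - 1):
        # trapezoidal evaluation of the excitation integral over one time step
        p[n + 1] = e * p[n] - 1j * sigma_c * dt_days / 2.0 * (chi[n] * e + chi[n + 1])
    return p

# toy usage: white-noise excitation slowly pumps a resonant Chandler oscillation
rng = np.random.default_rng(1)
chi = (rng.normal(size=4000) + 1j * rng.normal(size=4000)) * 5e-8   # radians
p = integrate_chandler(chi, dt_days=1.0)
print(np.abs(p[-5:]))   # polar-motion magnitude (radians) near the end of the run
```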
{"title":"Continental and oceanic AAM contributions to Chandler Wobble with the amplitude attenuation from 2012 to 2022","authors":"Xue-Qing Xu, Ming Fang, Yong-Hong Zhou, Xin-Hao Liao","doi":"10.1007/s00190-024-01872-z","DOIUrl":"https://doi.org/10.1007/s00190-024-01872-z","url":null,"abstract":"<p>We reconstructed the Chandler Wobble (CW) from 1962 to 2022 by combining the Eigen-oscillator excited by geophysical fluids of atmospheric and oceanic angular momentums (AAM and OAM). The mass and motion terms for the AAM are further divided with respect to the land and ocean domains. Particular attention is placed on the time span from 2012 to 2022 in relation to the observable reduction in the amplitude of the CW. Our research indicates that the main contributor to the CW induced by AAM is the mass term (i.e., the pressure variations over land). Moreover, the phase of the AAM-induced CW remains relatively stable during the interval of 1962–2022. In contrast, the phase of the OAM-induced CW exhibits a periodic variation with a cycle of approximately 20 years. This cyclic variation would modulate the overall amplitude of the CW. The noticeable amplitude deduction over the past ten years can be attributed to the evolution of the CW driven by AAM and OAM, toward a state of cancellation. These findings further reveal that the variation in the phase difference between the CW forced by AAM and OAM, may be indicative of changes in the interaction between the solid Earth, atmosphere, and ocean.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"15 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141425412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00190-024-01871-0
An improved parameter filtering approach for processing GRACE gravity field models using first-order Gauss–Markov process
Lin Zhang, Yunzhong Shen, Qiujie Chen, Kunpu Ji
Removing stripe noise from the GRACE (Gravity Recovery and Climate Experiment) monthly gravity field model is crucial for accurately interpreting temporal gravity variations. The conventional parameter filtering (CPF) approach expresses the signal components with a harmonic model while neglecting non-periodic and interannual signals. To address this issue, we improve the CPF approach by incorporating those ignored signals using a first-order Gauss–Markov process. The improved parameter filtering (IPF) approach is used to filter the monthly spherical harmonic coefficients (SHCs) of the Tongji-Grace2018 model from April 2002 to December 2016. Compared to the CPF approach, the IPF approach exhibits stronger signals in low-degree SHCs (i.e., degrees below 20) and lower noise in high-order SHCs (i.e., orders above 40), alongside higher signal-to-noise ratios and better agreement with the CSR mascon product and the NOAH model in global and basin analyses. Across the 22 largest basins worldwide, the average Nash–Sutcliffe coefficients of latitude-weighted terrestrial water storage anomalies filtered by the IPF approach relative to those derived from the CSR mascon product and the NOAH model are 0.90 and 0.21, significantly higher than the 0.17 and −0.71 obtained with the CPF approach. Simulation experiments further demonstrate that the IPF approach yields the filtered results closest to the actual signals, reducing root-mean-square errors by 30.1%, 25.9%, 45.3%, 30.9%, 46.6%, 32.7%, 39.6%, and 38.2% over land, and 2.8%, 54.4%, 70.1%, 15.3%, 69.2%, 46.5%, 40.4%, and 23.6% over the ocean, compared to the CPF, DDK3, least-squares, RMS, Gaussian 300, Fan 300, Gaussian 300 with P4M6, and Fan 300 with P4M6 filtering approaches, respectively.
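Conceptually, augmenting a harmonic regression with a first-order Gauss–Markov component can be cast as a small Kalman filter applied to each SHC time series. The sketch below is such a generic filter (bias, trend, annual and semiannual terms plus one Gauss–Markov state); the parameter values, units, and function name are assumptions for illustration and do not reproduce the paper's IPF formulation or tuning.

```python
import numpy as np

def ipf_like_filter(t, y, sigma_obs=1.0, tau=6.0, sigma_gm=1.0):
    """Kalman filter for one coefficient time series: deterministic harmonic
    regression (bias, trend, annual, semiannual) plus a first-order Gauss-Markov
    state for non-periodic / interannual signal.  t in years, tau in years."""
    w1, w2 = 2*np.pi, 4*np.pi                 # annual and semiannual frequencies
    n_state = 7                               # [bias, trend, cosA, sinA, cosSA, sinSA, gm]
    x = np.zeros(n_state)
    P = np.diag([1e2]*6 + [sigma_gm**2])      # loose priors on regression terms
    dt = np.median(np.diff(t))
    phi = np.exp(-dt / tau)                   # Gauss-Markov transition
    q_gm = sigma_gm**2 * (1 - phi**2)         # keeps the GM process stationary
    F = np.eye(n_state); F[6, 6] = phi
    Q = np.zeros((n_state, n_state)); Q[6, 6] = q_gm
    R = sigma_obs**2
    filtered = np.empty_like(y)
    for k, (tk, yk) in enumerate(zip(t, y)):
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        H = np.array([1.0, tk, np.cos(w1*tk), np.sin(w1*tk),
                      np.cos(w2*tk), np.sin(w2*tk), 1.0])
        S = H @ P @ H + R                     # innovation variance
        K = P @ H / S                         # Kalman gain
        x = x + K * (yk - H @ x)
        P = P - np.outer(K, H @ P)
        filtered[k] = H @ x                   # filtered signal value
    return filtered, x

# toy usage: monthly series with trend + annual cycle + slow GM wander + noise
t = np.arange(0, 15, 1/12.0)
rng = np.random.default_rng(2)
gm = np.zeros(len(t))
for k in range(1, len(t)):
    gm[k] = np.exp(-(1/12.0)/2.0) * gm[k-1] + rng.normal(scale=0.3)
y = 0.5*t + 2.0*np.sin(2*np.pi*t) + gm + rng.normal(scale=1.0, size=len(t))
signal, state = ipf_like_filter(t, y)
```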
{"title":"An improved parameter filtering approach for processing GRACE gravity field models using first-order Gauss–Markov process","authors":"Lin Zhang, Yunzhong Shen, Qiujie Chen, Kunpu Ji","doi":"10.1007/s00190-024-01871-0","DOIUrl":"https://doi.org/10.1007/s00190-024-01871-0","url":null,"abstract":"<p>Removing stripe noise from the GRACE (Gravity Recovery and Climate Experiment) monthly gravity field model is crucial for accurately interpreting temporal gravity variations. The conventional parameter filtering (CPF) approach expresses the signal components with a harmonic model while neglecting non-periodic and interannual signals. To address this issue, we improve the CPF approach by incorporating those ignored signals using a first-order Gauss–Markov process. The improved parameter filtering (IPF) approach is used to filter the monthly spherical harmonic coefficients (SHCs) of the Tongji-Grace2018 model from April 2002 to December 2016. Compared to the CPF approach, the IPF approach exhibits stronger signals in low-degree SHCs (i.e., degrees below 20) and lower noise in high-order SHCs (i.e., orders above 40), alongside higher signal-to-noise ratios and better agreement with CSR mascon product and NOAH model in global and basin analysis. Across the 22 largest basins worldwide, the average Nash–Sutcliffe coefficients of latitude-weighted terrestrial water storage anomalies filtered by the IPF approach relative to those derived from CSR mascon product and NOAH model are 0.90 and 0.21, significantly higher than 0.17 and − 0.71, filtered by the CPF approach. Simulation experiments further demonstrate that the IPF approach yields the filtered results closest to the actual signals, reducing root-mean-square errors by 30.1%, 25.9%, 45.3%, 30.9%, 46.6%, 32.7%, 39.6%, and 38.2% over land, and 2.8%, 54.4%, 70.1%, 15.3%, 69.2%, 46.5%, 40.4%, and 23.6% over the ocean, compared to CPF, DDK3, least square, RMS, Gaussian 300, Fan 300, Gaussian 300 with P4M6, and Fan 300 with P4M6 filtering approaches, respectively</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"2014 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141334130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-18 | DOI: 10.1007/s00190-024-01855-0
An extended w-test for outlier diagnostics in linear models
Yangkang Yu, Ling Yang, Yunzhong Shen
The issue of outliers has been a research focus in the field of geodesy. Based on a statistical testing method known as the w-test, data snooping and its iterative form, iterative data snooping (IDS), are commonly used to diagnose outliers in linear models. In the case of multiple outliers, however, they may suffer from masking and swamping effects, which limit their detection and identification capabilities. This contribution investigates the cause of the masking and swamping effects and proposes a new method to mitigate these phenomena. First, based on a division of the data, an extended form of the w-test with its reliability measure is presented, and a theoretical reinterpretation of data snooping and IDS is provided. Then, to alleviate the effects of masking and swamping, a new outlier diagnostic method and its iterative form are proposed, namely data refining and iterative data refining (IDR). In general, if the total observations are initially divided into an inlying set and an outlying set, data snooping can be considered a process of selecting outliers from the inlying set into the outlying set. Conversely, data refining is the reverse process of transferring inliers from the outlying set to the inlying one. Both theoretical analysis and practical examples show that IDR retains stronger robustness than IDS owing to the alleviation of the masking and swamping effects, although it may pose a higher risk of precision loss when dealing with insufficient data.
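For reference, the conventional scheme that the paper re-interprets, Baarda's w-test combined with iterative data snooping, can be sketched as follows for uncorrelated, equally weighted observations; the proposed data-refining/IDR step is not shown, and the function names and toy example are invented for illustration.

```python
import numpy as np

def w_statistics(A, y, sigma0=1.0):
    """Baarda w-test statistics for y = A x + e with Cov(e) = sigma0^2 * I."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    v = y - A @ x                                              # LS residuals
    Qvv = np.eye(len(y)) - A @ np.linalg.solve(A.T @ A, A.T)   # residual cofactors
    return v / (sigma0 * np.sqrt(np.diag(Qvv))), x

def iterative_data_snooping(A, y, sigma0=1.0, crit=3.29):
    """Classical IDS: flag the observation with the largest |w| above the critical
    value, re-adjust without it, and repeat until no test statistic exceeds crit."""
    keep = np.arange(len(y))
    while True:
        w, x = w_statistics(A[keep], y[keep], sigma0)
        i = np.argmax(np.abs(w))
        if np.abs(w[i]) <= crit or len(keep) <= A.shape[1] + 1:
            flagged = np.setdiff1d(np.arange(len(y)), keep)
            return x, flagged
        keep = np.delete(keep, i)                              # remove suspected outlier

# toy usage: straight-line fit contaminated by two gross errors
rng = np.random.default_rng(3)
t = np.linspace(0, 10, 40)
A = np.column_stack([np.ones_like(t), t])
y = 1.0 + 0.5*t + rng.normal(scale=0.1, size=len(t))
y[[5, 20]] += np.array([1.0, -1.5])                            # inject outliers
x_hat, flagged = iterative_data_snooping(A, y, sigma0=0.1)
print(x_hat, flagged)                                          # flagged should be [5, 20]
```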
{"title":"An extended w-test for outlier diagnostics in linear models","authors":"Yangkang Yu, Ling Yang, Yunzhong Shen","doi":"10.1007/s00190-024-01855-0","DOIUrl":"https://doi.org/10.1007/s00190-024-01855-0","url":null,"abstract":"<p>The issue of outliers has been a research focus in the field of geodesy. Based on a statistical testing method known as the <i>w</i>-test, data snooping along with its iterative form, iterative data snooping (IDS), is commonly used to diagnose outliers in linear models. However, in the case of multiple outliers, it may suffer from the masking and swamping effects, thereby limiting the detection and identification capabilities. This contribution is to investigate the cause of masking and swamping effects and propose a new method to mitigate these phenomena. First, based on the data division, an extended form of the <i>w</i>-test with its reliability measure is presented, and a theoretical reinterpretation of data snooping and IDS is provided. Then, to alleviate the effects of masking and swamping, a new outlier diagnostic method and its iterative form are proposed, namely data refining and iterative data refining (IDR). In general, if the total observations are initially divided into an inlying set and an outlying set, data snooping can be considered a process of selecting outliers from the inlying set to the outlying set. Conversely, data refining is then a reverse process to transfer inliers from the outlying set to the inlying one. Both theoretical analysis and practical examples show that IDR would keep stronger robustness than IDS due to the alleviation of masking and swamping effect, although it may pose a higher risk of precision loss when dealing with insufficient data.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"13 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141334123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-13 | DOI: 10.1007/s00190-024-01844-3
Combined algorithms of high-frequency topographical effects for the boundary-value problems based on Helmert's second condensation method
Jian Ma, Ziqing Wei, Zhenhe Zhai, Xinxing Li
Helmert's second condensation method is usually used to condense the topographical masses outside the boundary surface when determining the geoid and quasi-geoid based on boundary-value theory. The condensation of topographical masses produces direct and indirect topographical effects. Nowadays, the Remove-Compute-Restore (RCR) technique is widely utilized in boundary-value problems. In view of spectral consistency, high-frequency direct and indirect topographical effects should be used in the Hotine–Helmert/Stokes–Helmert integral when an Earth gravitational model serves as the reference model for determining the (quasi-)geoid. The algorithms for high-frequency topographical effects are therefore investigated in this manuscript. First, prism methods for the near-zone direct and indirect topographical effects are derived to improve the accuracy of the near-zone effects compared with the traditional surface integral methods. Second, Molodenskii spectral methods truncated to the power H^4 are put forward for the far-zone topographical effects. Next, the "prism + Molodenskii spectral-spherical harmonic" combined algorithms for the high-frequency topographical effects are presented. Finally, the effectiveness of the combined algorithms for the high-frequency topographical effects is verified in a mountainous test area.
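As an illustration of the elementary building block of the near-zone prism method, the sketch below evaluates the closed-form vertical attraction of a homogeneous rectangular prism, following the widely used expression of Nagy et al. (2000). It does not reproduce the paper's combined near-/far-zone algorithms, and the corner cases with zero arguments that require dedicated limiting formulas are deliberately left out.

```python
import numpy as np

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

def prism_gz(x_lim, y_lim, z_lim, rho):
    """Vertical attraction at the coordinate origin produced by a homogeneous
    rectangular prism occupying [x1,x2]x[y1,y2]x[z1,z2] (metres, relative to the
    computation point), after the closed-form expression of Nagy et al. (2000).
    Zero-argument corner cases need the dedicated limits and are not handled."""
    def F(x, y, z):
        r = np.sqrt(x*x + y*y + z*z)
        return x*np.log(y + r) + y*np.log(x + r) - z*np.arctan(x*y / (z*r))
    total = 0.0
    for i, x in enumerate(x_lim):
        for j, y in enumerate(y_lim):
            for k, z in enumerate(z_lim):
                sign = (-1.0)**(3 - (i + j + k))   # + for upper limits, - for lower
                total += sign * F(x, y, z)
    return G * rho * total   # m/s^2

# toy usage: 100 m x 100 m x 50 m rock prism, 200-250 m below the computation point
gz = prism_gz((-50.0, 50.0), (-50.0, 50.0), (200.0, 250.0), rho=2670.0)
print(gz * 1e5, "mGal")      # attraction in mGal (sign follows the formula's z-axis convention)
```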
{"title":"Combined algorithms of high-frequency topographical effects for the boundary-value problems based on Helmert's second condensation method","authors":"Jian Ma, Ziqing Wei, Zhenhe Zhai, Xinxing Li","doi":"10.1007/s00190-024-01844-3","DOIUrl":"https://doi.org/10.1007/s00190-024-01844-3","url":null,"abstract":"<p>The Helmert’s second condensation method is usually used to condense the topographical masses outside the boundary surface in the determination of the geoid and quasi-geoid based on the boundary-value theory. The condensation of topographical masses produces direct and indirect topographical effects. Nowadays, the Remove-Compute-Restore (RCR) technique has been widely utilized in the boundary-value problems. In view of spectral consistency, high-frequency direct and indirect topographical effects should be used in the Hotine-Helmert/Stokes–Helmert integral when the Earth gravitational model serves as the reference model in determining the (quasi-) geoid. Thus, the algorithms for high-frequency topographical effects are investigated in this manuscript. First, the prism methods for near-zone direct and indirect topographical effects are derived to improve the accuracies of near-zone effects compared with the traditional surface integral methods. Second, the Molodenskii spectral methods truncated to power <i>H</i><sup>4</sup> are put forward for far-zone topographical effects. Next, the \"prism + Molodenskii spectral-spherical harmonic\" combined algorithms for high-frequency topographical effects are further presented. At last, the effectiveness of the combined algorithms for the high-frequency topographical effects are verified in a mountainous test area.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"6 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141315734","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-06 | DOI: 10.1007/s00190-024-01869-8
Prospects of GENESIS and Galileo joint orbit and clock determination
Tomasz Kur, Krzysztof Sośnica, Maciej Kalarus
The European Space Agency (ESA) is preparing a satellite mission called GENESIS, to be launched in 2027 as part of the FutureNAV program. GENESIS co-locates, for the first time, all four space geodetic techniques on one satellite platform. The main objectives of the mission are the realization of the International Terrestrial Reference Frames and the mitigation of biases in geodetic measurements; beyond that, GENESIS will contribute remarkably to the determination of geodetic parameters. The precise GENESIS orbits will be determined through satellite-to-satellite tracking, employing two GNSS antennas to observe GPS and Galileo satellites in both nadir and zenith directions. In this research, we show results from simulations of GENESIS and Galileo-like constellations with joint orbit and clock determination. We assess the orbit quality of GENESIS based on nadir-only, zenith-only, and combined nadir–zenith GNSS observations. The results show that joint GENESIS and Galileo orbit and clock determination substantially improves Galileo orbits, satellite clocks, and even the ground-based clocks of GNSS receivers tracking Galileo satellites. Although the zenith and nadir GNSS antennas favor different orbital planes in terms of the number of collected observations, the mean results for each Galileo orbital plane are improved to a similar extent. The 3D orbit error of Galileo improves from 27 mm (Galileo-only) to 23 mm (Galileo + zenith), 16 mm (Galileo + nadir), and 14 mm (Galileo + zenith + nadir GENESIS observations), i.e., by almost a factor of two in the joint GENESIS + Galileo orbit and clock solutions.
{"title":"Prospects of GENESIS and Galileo joint orbit and clock determination","authors":"Tomasz Kur, Krzysztof Sośnica, Maciej Kalarus","doi":"10.1007/s00190-024-01869-8","DOIUrl":"https://doi.org/10.1007/s00190-024-01869-8","url":null,"abstract":"<p>The European Space Agency (ESA) is preparing a satellite mission called GENESIS to be launched in 2027 as part of the FutureNAV program. GENESIS co-locates, for the first time, all four space geodetic techniques on one satellite platform. The main objectives of the mission are the realization of the International Terrestrial Reference Frames and the mitigation of biases in geodetic measurements; however, GENESIS will remarkably contribute to the determination of the geodetic parameters. The precise GENESIS orbits will be determined through satellite-to-satellite tracking, employing two GNSS antennas to observe GPS and Galileo satellites in both nadir and zenith directions. In this research, we show results from simulations of GENESIS and Galileo-like constellations with joint orbit and clock determination. We assess the orbit quality of GENESIS based on nadir-only, zenith-only, and combined nadir–zenith GNSS observations. The results prove that GENESIS and Galileo joint orbit and clock determination substantially improves Galileo orbits, satellite clocks, and even ground-based clocks of GNSS receivers tracking Galileo satellites. Although zenith and nadir GNSS antennas favor different orbital planes in terms of the number of collected observations, the mean results for each Galileo orbital plane are improved to a similar extent. The 3D orbit error of Galileo is improved from 27 mm (Galileo-only), 23 mm (Galileo + zenith), 16 mm (Galileo + nadir), to 14 mm (Galileo + zenith + nadir GENESIS observations), i.e., almost by a factor of two in the joint GENESIS + Galileo orbit and clock solutions.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"313 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141264907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-06 | DOI: 10.1007/s00190-024-01858-x
Uncertainties associated with integral-based solutions to geodetic boundary-value problems
Pavel Novák, Mehdi Eshagh, Martin Pitoňák
Physical geodesy applies potential theory to study the Earth's gravitational field in the space outside and up to a few kilometres inside the Earth's mass. Among the various tools offered by this theory, boundary-value problems are particularly popular for the transformation or continuation of gravitational field parameters across space. Traditional problems, formulated and solved as early as the nineteenth century, have gradually been supplemented with new problems as new observational methods and data have become available. In most cases, the emphasis is on formulating a functional relationship involving two functions in 3-D space: the values of one function are sought but unobservable, while the values of the other function are observable but contaminated by errors. Such mathematical models (observation equations) are referred to as deterministic. Since observed data burdened with observational errors are used for their solution, the relevant stochastic models must be formulated to provide uncertainties of the estimated parameters against which their quality can be evaluated. This article discusses the boundary-value problems of potential theory formulated for gravitational data used by physical geodesy now or in the foreseeable future. Their solutions in the form of integral formulas and integral equations are reviewed, practical estimators applicable to numerical solutions of the deterministic models are formulated, and the related stochastic models are introduced. Deterministic and stochastic models together represent a complete solution to problems in physical geodesy, providing estimates of unknown parameters and their error variances (mean squared errors). On the other hand, analyses of error covariances can reveal problems related to the observed data and/or the design of the mathematical models. Numerical experiments demonstrate the applicability of the stochastic models in practice.
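To make the deterministic/stochastic pairing concrete, the sketch below treats a discretized Stokes integral as a linear estimator and propagates uncorrelated gravity-anomaly error variances into the variance of the resulting geoid height. It is a generic textbook illustration, not one of the paper's specific estimators; the function names and toy numbers are invented.

```python
import numpy as np

def stokes_kernel(psi):
    """Stokes' function S(psi) (Heiskanen & Moritz 1967, eq. 2-164)."""
    s = np.sin(psi / 2.0)
    return (1.0/s - 6.0*s + 1.0 - 5.0*np.cos(psi)
            - 3.0*np.cos(psi)*np.log(s + s*s))

def geoid_estimate_with_variance(dg, var_dg, psi, cell_area, R=6371000.0, gamma=9.81):
    """Discretized Stokes integral N_hat = a^T dg together with its propagated
    error variance a^T C a, assuming uncorrelated gravity-anomaly errors.
      dg        : gravity anomalies on the cells [m/s^2]
      var_dg    : their error variances [(m/s^2)^2]
      psi       : spherical distances cell -> computation point [rad]
      cell_area : solid angles of the cells [sr]"""
    a = R / (4.0 * np.pi * gamma) * stokes_kernel(psi) * cell_area   # integration weights
    N_hat = a @ dg                       # geoid height contribution [m]
    var_N = a @ (var_dg * a)             # propagated error variance [m^2]
    return N_hat, np.sqrt(var_N)

# toy usage: a ring of far-zone 1-degree cells with 1 mGal anomaly noise
psi = np.radians(np.linspace(5.0, 10.0, 200))
area = np.full_like(psi, np.radians(1.0)**2)
dg = np.random.default_rng(4).normal(scale=30e-5, size=psi.size)    # ~30 mGal anomalies
N, sigma_N = geoid_estimate_with_variance(dg, np.full_like(dg, (1e-5)**2), psi, area)
print(N, sigma_N)                        # estimate and its standard deviation [m]
```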
{"title":"Uncertainties associated with integral-based solutions to geodetic boundary-value problems","authors":"Pavel Novák, Mehdi Eshagh, Martin Pitoňák","doi":"10.1007/s00190-024-01858-x","DOIUrl":"https://doi.org/10.1007/s00190-024-01858-x","url":null,"abstract":"<p>Physical geodesy applies potential theory to study the Earth’s gravitational field in space outside and up to a few km inside the Earth’s mass. Among various tools offered by this theory, boundary-value problems are particularly popular for the transformation or continuation of gravitational field parameters across space. Traditional problems, formulated and solved as early as in the nineteenth century, have been gradually supplemented with new problems, as new observational methods and data are available. In most cases, the emphasis is on formulating a functional relationship involving two functions in 3-D space; the values of one function are searched but unobservable; the values of the other function are observable but with errors. Such mathematical models (observation equations) are referred to as deterministic. Since observed data burdened with observational errors are used for their solutions, the relevant stochastic models must be formulated to provide uncertainties of the estimated parameters against which their quality can be evaluated. This article discusses the boundary-value problems of potential theory formulated for gravitational data currently or in the foreseeable future used by physical geodesy. Their solutions in the form of integral formulas and integral equations are reviewed, practical estimators applicable to numerical solutions of the deterministic models are formulated, and their related stochastic models are introduced. Deterministic and stochastic models represent a complete solution to problems in physical geodesy providing estimates of unknown parameters and their error variances (mean squared errors). On the other hand, analyses of error covariances can reveal problems related to the observed data and/or the design of the mathematical models. Numerical experiments demonstrate the applicability of stochastic models in practice.</p>","PeriodicalId":54822,"journal":{"name":"Journal of Geodesy","volume":"119 1","pages":""},"PeriodicalIF":4.4,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141287231","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}