A technique to reduce the edge effect in least squares extrapolation for enhanced Earth orientation prediction
Danning Zhao, Yu Lei
Pub Date: 2020-05-30 | DOI: 10.1007/s11200-021-0546-2 | Studia Geophysica et Geodaetica 64(3), 293-305
A well-known property of classical least squares (LS) extrapolation is that the fit is best in the middle of the time span of the observed data and worse near its beginning and end. This phenomenon is called the edge effect in data processing. The goal of this work is to reduce the edge effect and thereby improve predictions of the Earth rotation parameters (ERP), which comprise the Earth's polar motion and rotation angle (the difference UT1 − UTC between the smoothed principal form of universal time and coordinated universal time), since an LS fit that is accurate near the end of the data is better suited for extrapolation. We first use LS extrapolation, with a model consisting of one polynomial and two sinusoids, in combination with an autoregressive (AR) technique to extend the observed time series forward. We then re-estimate the LS extrapolation model from the extended time series to reduce the edge effect. ERP predictions are subsequently generated by combining the edge-effect-reduced LS extrapolation with the AR technique, a method denoted ERLS + AR. Through an example, we demonstrate that the edge effect in the fit to the observed data can be reduced by re-estimating the LS extrapolation model from the extended time series. To validate the ERLS + AR method, we calculate ERP predictions up to 365 days into the future, year by year, for the 4-year period from 2014 to 2017, using data from the previous 8 years. The results show that the accuracy of short-term predictions obtained by the ERLS + AR method is comparable with that achieved by the LS + AR approach in terms of the mean absolute error (MAE). However, an accuracy improvement is found mostly in the long-term predictions based on the ERLS + AR method: the MAE of the UT1 − UTC and polar motion predictions decreases by approximately 15% to 20%. We therefore suggest embedding the ERLS extrapolation algorithm into existing ERP prediction procedures.
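The extend-then-refit idea can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' implementation: the linear trend, the annual/semi-annual periods, the AR order and the Yule-Walker fitting below are all illustrative assumptions.

```python
import numpy as np

def ls_design(t, periods=(365.24, 182.62)):
    """Design matrix: linear trend plus two sinusoids (annual and
    semi-annual periods here are assumed, not taken from the paper)."""
    cols = [np.ones_like(t), t]
    for P in periods:
        w = 2.0 * np.pi / P
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

def ar_coefficients(x, p):
    """Yule-Walker estimate of AR(p) coefficients (a[0] is the lag-1 term)."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, r[1:])

def erls_fit(t, y, n_extend=365, ar_order=30):
    """Edge-effect-reduced LS fit: (1) fit the LS model, (2) extend the
    series forward with the LS model plus an AR forecast of the
    residuals, (3) re-estimate the LS model from the extended series."""
    A = ls_design(t)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    a = ar_coefficients(resid, ar_order)
    ext = list(resid)
    for _ in range(n_extend):                  # AR forecast of residuals
        ext.append(float(np.dot(a, ext[-ar_order:][::-1])))
    t_new = t[-1] + 1.0 + np.arange(n_extend)
    y_ext = np.concatenate([y, ls_design(t_new) @ coef + np.array(ext[len(y):])])
    t_ext = np.concatenate([t, t_new])
    coef2, *_ = np.linalg.lstsq(ls_design(t_ext), y_ext, rcond=None)
    return coef2
```

Evaluating `ls_design` at future epochs with the re-estimated coefficients then yields the ERLS part of the prediction, to which an AR forecast of the new residuals would be added.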
Archaeomagnetic investigations in Bolgar (Tatarstan)
Lina R. Kosareva, Dilyara M. Kuzina, Danis K. Nurgaliev, Airat G. Sitdikov, Olga V. Luneva, Damir I. Khasanov, Neil Suttie, Simo Spassov
Pub Date: 2020-04-25 | DOI: 10.1007/s11200-019-0493-3 | Studia Geophysica et Geodaetica 64(2), 255-292
The objective of this study is to provide a well-dated point for a future palaeosecular variation (PSV) reference curve for western Russia. For this purpose, archaeomagnetic and magnetic property analyses were carried out on a pottery kiln unearthed at the UNESCO World Heritage site of ancient Bolgar, for which a rather precise age is available: the archaeological context provided an age between 1340 and 1360 C.E. The characteristic remanence vector was determined through alternating-field demagnetisation and Thellier-Thellier palaeointensity experiments. Some innovations were introduced regarding palaeointensity. The check testing the equality of blocking and unblocking temperatures was redefined, which allowed waiving the commonly used additional zero-field cooling steps during the Thellier-Thellier experiment. Another innovation concerns the calculation of archaeointensity at the structure level: a Bayesian approach was introduced for averaging individual specimen archaeointensities using a prior probability distribution of unknown uncertainties, and an additional prior probability distribution was used to correct for cooling-rate effects. This resulted in a lower uncertainty compared to common practice and avoided time-consuming cooling-rate experiments. The complex magnetic mineralogy consists of maghaemite, multi-domain haematite and Al-substituted haematite; some samples also contained non-stoichiometric magnetite. The magnetic mineralogy was determined through hysteresis loops, backfield and remanence decay curves, measurements of the frequency dependence of magnetic susceptibility, and low-temperature magnetisation curves. Accompanying high-temperature thermomagnetic analyses revealed an excellent thermo-chemical stability of the studied specimens. Directions obtained from alternating-field demagnetisation and those extracted from the archaeointensity experiments are congruent and have low uncertainties. The obtained archaeomagnetic results are in fair agreement with global geomagnetic field models and contemporary PSV data of the wider area. The geomagnetic field vector obtained for ancient Bolgar is of high quality and thus deserves inclusion in a future PSV reference curve for European Russia.
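Averaging specimen intensities while allowing for an unknown extra scatter can be sketched as follows. This is a generic hierarchical-Gaussian sketch of the idea (flat grid prior on the unknown scatter tau, marginalised numerically); the paper's actual priors and cooling-rate correction are not reproduced.

```python
import numpy as np

def bayesian_site_mean(F, sigma, tau_grid=None):
    """Average specimen archaeointensities F (with nominal uncertainties
    sigma) allowing for an unknown additional scatter tau, marginalised
    over a flat grid prior. Illustrative stand-in for the paper's
    Bayesian averaging, not its exact model."""
    F, sigma = np.asarray(F, float), np.asarray(sigma, float)
    if tau_grid is None:
        tau_grid = np.linspace(0.0, 3.0 * F.std() + 1e-9, 200)
    means, logliks = [], []
    for tau in tau_grid:
        var = sigma**2 + tau**2          # nominal plus unknown scatter
        w = 1.0 / var
        mu = np.sum(w * F) / np.sum(w)   # weighted mean given tau
        # Gaussian marginal log-likelihood of the data given tau
        loglik = (-0.5 * np.sum(np.log(var) + (F - mu)**2 / var)
                  - 0.5 * np.log(np.sum(w)))
        means.append(mu)
        logliks.append(loglik)
    wts = np.exp(np.array(logliks) - np.max(logliks))
    wts /= wts.sum()
    return float(np.dot(wts, means))     # posterior-weighted site mean
```

The point of the extra scatter term is that a site mean is not over-confident when specimen results disagree by more than their nominal uncertainties suggest.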
Simple numerical tests for ocean tidal models
Libor Šachl, David Einšpigel, Zdeněk Martinec
Pub Date: 2020-04-25 | DOI: 10.1007/s11200-019-0348-y | Studia Geophysica et Geodaetica 64(2), 202-240
There is a growing interest in tidal effects on the global wind-driven oceanic circulation. Tidal models used in such investigations have been verified by comparison with satellite and tide-gauge data, but synthetic tests have not been published. In this paper we present three numerical tests in spherical geometry which are suitable for testing the tidal component of global ocean models. The first test is a tsunami-like propagation of an initial Gaussian depression with no external forcing. The other two tests examine the tidal response of an ocean with an undulating bottom with four Gaussian ridges and of an ocean with a flat bottom and a realistic land mask. We provide results from six model configurations, which differ in the time-stepping scheme and computational grid used; most of them are implemented in present-day global ocean models. Although the proposed numerical tests are simple compared to realistic simulations, their analytic solutions are not available. We therefore check the conservation of time invariants to ensure that the solutions are physically meaningful. We also compare the time evolution of certain physical quantities, and the differences in sea surface heights at particular time instants, with respect to a reference solution. All tested time-stepping schemes are suitable for tidal studies except for the implicit Euler scheme. Model configurations based on the Arakawa B/E grids use smoothing to suppress grid-scale noise, which results in an energy leakage of around 5%. This leakage is probably acceptable considering that tuned diffusive terms are used in real-world configurations. The C-grid and B/E-grid solutions differ in the vicinity of solid boundaries as a consequence of different boundary conditions. The B-grid and E-grid solutions are similar unless the shape of the solid boundaries is complex, owing to the different shapes of the respective grid cells.
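The kind of invariant check described above can be illustrated on a much smaller problem. The sketch below steps the linear 1-D shallow-water equations on a periodic Arakawa C-grid with a forward-backward scheme and monitors the discrete total energy; it is only an analogue of the tests in the paper (1-D, linear, Cartesian), with all parameter values assumed.

```python
import numpy as np

def step_c_grid(eta, u, dx, dt, H=4000.0, g=9.81):
    """One forward-backward step of the linear 1-D shallow-water
    equations on a periodic C-grid: u lives on cell faces, with u[i]
    between eta[i-1] and eta[i]."""
    u = u - g * dt * (eta - np.roll(eta, 1)) / dx          # momentum
    eta = eta - H * dt * (np.roll(u, -1) - u) / dx         # continuity
    return eta, u

def total_energy(eta, u, dx, H=4000.0, g=9.81, rho=1025.0):
    """Discrete total (potential + kinetic) energy: the time invariant
    one checks when no analytic solution is available."""
    return rho * dx * np.sum(0.5 * g * eta**2 + 0.5 * H * u**2)
```

Starting from a Gaussian depression at rest (as in the first test of the paper), the energy of a stable scheme should stay close to its initial value; a scheme or smoothing stage that leaks energy shows up immediately in this diagnostic.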
Robustness of squared Msplit(q) estimation: Empirical analyses
Robert Duchnowski, Zbigniew Wiśniewski
Pub Date: 2020-04-06 | DOI: 10.1007/s11200-019-0356-y | Studia Geophysica et Geodaetica 64(2), 153-171
This paper concerns squared Msplit(q) estimation and its robustness against outliers. Previous studies in this field have been based on theoretical approaches, and it has been proven that a conventional analysis of robustness is insufficient for Msplit(q) estimation. This is due to the split of the functional model into q competitive ones and, hence, the estimation of q competitive versions of the parameters of such models. Robustness should therefore be considered both from the global point of view (the traditional approach) and from the local point of view (robustness in the relation between two "neighboring" estimates of the parameters). Theoretical considerations have produced many interesting findings about the robustness of Msplit(q) estimation and of squared Msplit(q) estimation, although some of these properties are asymptotic. This paper therefore focuses on an empirical analysis of the robustness of squared Msplit(q) estimation for finite samples, producing information on robustness from a more practical point of view. The analyses are mostly based on Monte Carlo simulations. Different numbers of observation aggregations are considered to determine how the assumed value of q influences the estimation results. The analysis shows that local robustness (empirical local breakdown points) is fully compatible with the theoretical derivations. Global robustness depends strongly on a correct assumption regarding q: if it suits reality, i.e. if we predict the number of observation aggregations and the number of outliers correctly, then squared Msplit(q) estimation can be an alternative to classical robust estimation. This is confirmed by empirical comparisons between the method in question and robust M-estimation (the Huber method). On the other hand, if the assumed value of q is incorrect, squared Msplit(q) estimation usually breaks down.
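The Monte Carlo style of experiment described above can be sketched for the classical comparator. The code below implements the Huber M-estimate of location (the reference method named in the abstract) and a crude empirical breakdown probe; the squared Msplit(q) estimator itself is not reproduced, and the contamination model, sample size and error threshold are all illustrative assumptions.

```python
import numpy as np

def huber_mean(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted least
    squares, with a MAD-based scale estimate."""
    x = np.asarray(x, float)
    mu = np.median(x)
    s = 1.4826 * np.median(np.abs(x - mu)) or 1.0   # robust scale
    for _ in range(max_iter):
        r = (x - mu) / s
        w = np.where(np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def empirical_breakdown(estimator, n=50, shift=1000.0, trials=100, seed=1):
    """Crude empirical breakdown point: the smallest contamination
    fraction at which the estimator's mean absolute error exceeds ten
    standard deviations of the clean N(0,1) data."""
    rng = np.random.default_rng(seed)
    for m in range(n // 2 + 1):
        errs = []
        for _ in range(trials):
            x = rng.normal(0.0, 1.0, n)
            x[:m] += shift                 # m gross outliers
            errs.append(abs(estimator(x)))
        if np.mean(errs) > 10.0:
            return m / n
    return 0.5
```

Replacing `huber_mean` with any other estimator lets the same harness probe its global robustness empirically, which is the spirit of the experiments in the paper.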
Blended noise suppression using a hybrid median filter, normal moveout and complex curvelet transform approach
Lieqian Dong, Changhui Wang, Mugang Zhang, Deying Wang, Xiaofeng Liang
Pub Date: 2020-04-06 | DOI: 10.1007/s11200-020-0269-9 | Studia Geophysica et Geodaetica 64(2), 241-254
The high-density acquisition technique can improve subsurface imaging accuracy, but it rapidly increases production cost, which limits its wide application in practice. To address this issue, high-productivity blended acquisition has emerged as a promising way to significantly increase the efficiency of seismic acquisition and reduce production cost. The great challenge of blended acquisition lies in the severe interference noise of simultaneous sources; its success therefore relies heavily on how effectively the signal energy can be separated from the blended noise. We propose a blended noise suppression approach using a hybrid median filter, normal moveout (NMO) and complex curvelet transform (CCT) scheme. First, a median filter is applied to the original data after NMO correction. Second, a CCT-based thresholding denoising method is used to extract the remaining signal energy from the median-filtered data, giving a preliminary de-blended result. Next, updated data are obtained by subtracting the pseudo-de-blended version of this result from the original data, and the process iterates. Finally, the de-blended result is obtained by accumulating the energy retrieved at each iteration until the signal-to-noise ratio reaches the desired level. We demonstrate the effectiveness of the proposed approach on synthetic and field data examples.
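The estimate-and-subtract loop described above can be sketched on a 1-D toy problem. Here a plain median filter stands in for the full NMO + median + complex-curvelet chain, and the blending noise is modelled as isolated spikes; both are simplifying assumptions, not the paper's setup.

```python
import numpy as np

def medfilt1(x, k):
    """Simple 1-D median filter with edge padding."""
    pad = k // 2
    xp = np.pad(np.asarray(x, float), pad, mode="edge")
    return np.median(np.lib.stride_tricks.sliding_window_view(xp, k), axis=-1)

def deblend(blended, kernel=7, n_iter=10):
    """Iterative estimate-and-subtract deblending sketch: each pass
    extracts coherent energy from the current residual and adds it to
    the running estimate; what remains in the residual is treated as
    blending noise."""
    b = np.asarray(blended, float)
    est = np.zeros_like(b)
    resid = b.copy()
    for _ in range(n_iter):
        est = est + medfilt1(resid, kernel)  # keep coherent energy
        resid = b - est                      # update the residual
    return est
```

The iteration matters because a single denoising pass also removes some signal; feeding the residual back lets later passes recover that energy, which is the rationale for the accumulate-until-SNR-is-reached loop in the paper.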
High accuracy gravity terrain correction by Optimally Selecting Sectors algorithm based on Hammer charts method
Saber Jahanjooy, Mohammad Pirouei, Kamal Kolo
Pub Date: 2020-04-06 | DOI: 10.1007/s11200-019-0273-0 | Studia Geophysica et Geodaetica 64(2), 172-185
To study subsurface features and structures, the gravity effect of the surrounding topography must be removed from the acquired gravity field data. Several methods are available to calculate the terrain effect at each gravity station. Some of them are tedious and time consuming because of the large number of calculations, long-distance terrain effects and minimum acceptable errors; the faster methods do not meet the accuracy requirements of local surveys. In rough topography, using an average elevation for each sector of the calculation area leads to overestimation or underestimation of the terrain effect. Since most terrain correction methods employ a pre-divided web or mesh over the survey area, the resulting sectors do not match geographical features with distinct mass centers. We propose an Optimally Selecting Sectors (OSS) algorithm that automatically partitions the surrounding area into sectors that separate different mass centers, finds the optimum elevation of each sector, and calculates the terrain effect at the gravity stations. This procedure improves the accuracy of the calculated terrain effects. A tolerance inside the algorithm controls the accuracy of the method; its value depends on the application of the gravity data. Application of the method to synthetic models with different geometrical shapes, and to real digital elevation data of a mountainous area in the Kurdistan region, shows improved accuracy of the terrain correction. However, the proposed OSS method requires extra computation compared with some previous terrain correction methods.
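The building block of any Hammer-style correction is the attraction of one annular sector of mean elevation offset dh, obtained from the closed-form attraction of a hollow cylinder. The sketch below computes that sector effect and sums it over a partition such as the one OSS produces; the interface (a list of sector tuples) is an assumption for illustration.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sector_terrain_effect(r1, r2, theta, dh, rho=2670.0):
    """Terrain effect (m/s^2) of one annular sector of a Hammer-style
    chart: angle theta between radii r1 < r2, mean elevation differing
    by dh from the station. Both hills (dh > 0) and valleys (dh < 0)
    reduce measured gravity, so the correction is positive either way."""
    h = abs(dh)
    return G * rho * theta * (r2 - r1 + np.hypot(r1, h) - np.hypot(r2, h))

def terrain_correction(sectors, rho=2670.0):
    """Sum of sector effects; `sectors` is an iterable of
    (r1, r2, theta, dh) tuples, e.g. produced by the OSS partitioning."""
    return sum(sector_terrain_effect(*s, rho=rho) for s in sectors)
```

For |dh| much smaller than r1, the exact expression reduces to G rho theta h^2/2 (1/r1 - 1/r2), which shows why averaging elevation over a sector that straddles two distinct mass centers biases the result: the effect is quadratic in dh, so the effect of the mean is not the mean of the effects.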
Effect of Qattara Depression on gravity and geoid using unclassified digital terrain models
Hussein A. Abd-Elmotaal, Norbert Kühtreiber
Pub Date: 2020-04-06 | DOI: 10.1007/s11200-018-1240-x | Studia Geophysica et Geodaetica 64(2), 186-201
The determination of the gravimetric geoid is based on the magnitude of the gravity observed at the topographic surface of the Earth. In order to satisfy Laplace's equation, the masses between the surface of the Earth and the geoid must be removed or shifted inside the geoid; the gravity values then have to be reduced to the geoid, forming the boundary values on the boundary surface. Gravity reduction techniques using unclassified Digital Terrain Models (DTM) usually presume that negative elevations are reserved for ocean stations. In the case of the Qattara Depression, however, the elevations are negative, i.e. below sea level, on land. This leads to an obvious error in the topographic-isostatic reduction when, for example, the TC-program is used with an unclassified DTM: the depression is assumed to be filled with water instead of air, and the computation is carried out at the non-existing sea level instead of at the actual negative topography. The aim of this paper is to determine the effect of the Qattara Depression on gravity reduction and geoid computation, as a prototype of the effect of unclassified land depressions on gravity reduction and geoid determination. The results show that the effect of the Qattara Depression on the gravity reduction reaches 20 mGal and is restricted to the depression area, while its effect on the geoid exceeds 1 m and is regional, extending over a distance of about 1000 km.
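The order of magnitude of the misclassification error can be illustrated with the infinite Bouguer slab: treating a dry depression as ocean amounts to inserting a spurious water slab between the negative topography and sea level. This is only the leading-order mechanism sketched under assumed parameter values (a depression depth of about 133 m, water density 1025 kg/m^3); the paper's TC-program computation is far more elaborate.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2
TWO_PI_G = 2.0 * np.pi * G

def bouguer_slab(h, rho):
    """Attraction (m/s^2) of an infinite Bouguer slab of thickness h
    and density rho."""
    return TWO_PI_G * rho * h

def depression_misclassification_error(h, rho_water=1025.0):
    """Spurious gravity effect (mGal) introduced when an unclassified
    DTM treats a land station at negative elevation h (< 0, metres) as
    an ocean point, i.e. fills the depression with water up to sea
    level. Leading-order sketch only."""
    assert h < 0.0
    return bouguer_slab(-h, rho_water) / 1e-5  # m/s^2 -> mGal
```

For a depth of roughly 133 m this water-slab term alone is several mGal; the additional error from evaluating the reduction at sea level rather than at the actual negative topography brings the total toward the ~20 mGal reported in the paper.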
Pub Date : 2020-01-17 DOI: 10.1007/s11200-019-1172-0
Hao Zhao, Anders Ueland Waldeland, Dany Rueda Serrano, Martin Tygel, Einar Iversen
Advanced seismic imaging and inversion depend on a velocity model that is sufficiently accurate to render reliable and meaningful results. For that reason, methods for extracting such velocity models from seismic data are always in high demand and are topics of active investigation. Velocity models can be obtained in both the time and depth domains. Relying on the former, time migration is an inexpensive, quick and robust process. In spite of its limitations, especially in the case of complex geologies, time migration can, in many instances (e.g. simple to moderate geological structures), produce images comparable to those required for the project at hand. An accurate time-velocity model can be of great use in the construction of an initial depth-velocity model, from which a high-quality depth image can be produced. Based on available explicit and analytical expressions that relate the kinematic attributes (namely, traveltimes and local slopes) of local events in the recording (demigration) and migrated domains, we revisit tomographic methodologies for velocity-model building, with a specific focus on the time domain and on methods that make use of local slopes, as well as traveltimes, as key attributes for imaging. We also adopt the strategy of estimating local inclinations in the time-migrated domain (where we have less noise and better focus) and using demigration to estimate those inclinations in the recording domain. On the theoretical side, the main contributions of this work are twofold: 1) we base the velocity-model estimation on kinematic migration/demigration techniques that are nonlinear (and therefore more accurate than simplistic linear approaches), and 2) the corresponding Fréchet derivatives take into account that the velocity model is laterally heterogeneous. In addition to the comprehensive mathematical algorithms involved, three proof-of-concept numerical examples are presented, which confirm the potential of our methodology.
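The role of Fréchet derivatives in tomographic updating can be sketched generically. The toy forward model below (a straight-ray traveltime through a laterally heterogeneous velocity v(x) = v0 + gx·x) is a stand-in assumption, not the paper's nonlinear kinematic migration/demigration solver; it only illustrates how finite-difference Fréchet derivatives of traveltimes with respect to model parameters feed a Gauss-Newton velocity update:

```python
import numpy as np

def traveltime(params, x_src, x_rec):
    """Straight-ray traveltime through v(x) = v0 + gx*x (toy forward model)."""
    v0, gx = params
    xs = np.linspace(x_src, x_rec, 201)
    s = 1.0 / (v0 + gx * xs)                 # slowness sampled along the ray
    dx = xs[1] - xs[0]
    return float(np.sum((s[:-1] + s[1:]) * 0.5) * dx)   # trapezoid rule

def frechet(params, pairs, eps=1e-3):
    """Jacobian d(traveltime)/d(params) by central finite differences."""
    J = np.zeros((len(pairs), len(params)))
    for i, (xs, xr) in enumerate(pairs):
        for j in range(len(params)):
            p_hi = np.array(params, float); p_hi[j] += eps
            p_lo = np.array(params, float); p_lo[j] -= eps
            J[i, j] = (traveltime(p_hi, xs, xr) - traveltime(p_lo, xs, xr)) / (2.0 * eps)
    return J

# Synthetic experiment: recover an assumed "true" model (2000 m/s, 0.3 1/s)
# from four source-receiver pairs, starting from a homogeneous guess.
true_p = (2000.0, 0.3)
pairs = [(0.0, 500.0), (0.0, 1000.0), (200.0, 1500.0), (0.0, 2000.0)]
t_obs = np.array([traveltime(true_p, *pr) for pr in pairs])

p = np.array([1800.0, 0.0])                  # initial guess: no lateral gradient
for _ in range(5):                           # Gauss-Newton iterations
    r = t_obs - np.array([traveltime(p, *pr) for pr in pairs])
    J = frechet(p, pairs)
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]
```

The nonlinearity of the forward map is handled by iterating the linearized update, which is the same structural idea as the nonlinear kinematic solvers of the paper, only with a vastly simpler forward model.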
{"title":"Time-migration velocity estimation using Fréchet derivatives based on nonlinear kinematic migration/demigration solvers","authors":"Hao Zhao, Anders Ueland Waldeland, Dany Rueda Serrano, Martin Tygel, Einar Iversen","doi":"10.1007/s11200-019-1172-0","DOIUrl":"https://doi.org/10.1007/s11200-019-1172-0","url":null,"abstract":"<p>Advanced seismic imaging and inversion are dependent on a velocity model that is sufficiently accurate to render reliable and meaningful results. For that reason, methods for extracting such velocity models from seismic data are always in high demand and are topics of active investigation. Velocity models can be obtained from both the time and depth domains. Relying on the former, time migration is an inexpensive, quick and robust process. In spite of its limitations, especially in the case of complex geologies, time migration can, in many instances (e.g. simple to moderate geological structures), produce image results compatible to the those required for the project at hand. An accurate time-velocity model can be of great use in the construction of an initial depth-velocity model, from which a high-quality depth image can be produced. Based on available explicit and analytical expressions that relate the kinematic attributes (namely, traveltimes and local slopes) of local events in the recording (demigration) and migrated domains, we revisit tomographic methodologies for velocity-model building, with a specific focus on the time domain, and on those that makes use of local slopes, as well as traveltimes, as key attributes for imaging. We also adopt the strategy of estimating local inclinations in the time-migrated domain (where we have less noise and better focus) and use demigration to estimate those inclinations in the recording domain. 
On the theoretical side, the main contributions of this work are twofold: 1) we base the velocity model estimation on kinematic migration/demigration techniques that are nonlinear (and therefore more accurate than simplistic linear approaches) and 2) the corresponding Fréchet derivatives take into account that the velocity model is laterally heterogeneous. In addition to providing the comprehensive mathematical algorithms involved, three proof-of-concept numerical examples are demonstrated, which confirm the potential of our methodology.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"26 - 75"},"PeriodicalIF":0.9,"publicationDate":"2020-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1172-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4677454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-01-17 DOI: 10.1007/s11200-019-1247-y
Ayman N. Qadrouh, José M. Carcione, Mamdoh Alajmi, Jing Ba
An elastic two-phase composite, with no restriction on the shape of the two phases, has stiffness bounds given by the Reuss and Voigt equations, and a narrower range determined by the Hashin-Shtrikman bounds. Averages are given by the Voigt-Reuss-Hill, Hashin-Shtrikman, Gassmann, Backus and Wyllie equations. To obtain the corresponding viscoelastic bounds and averages, we invoke the correspondence principle to compute the solution of the viscoelastic problem from the corresponding elastic solution. Seismic velocities and attenuation are then established for the above physical and heuristic models, which, unlike the Backus average, account for general geometrical shapes. The approach is relevant to the seismic characterization of solid composites such as hydrocarbon source rocks.
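The elastic bounds named above have compact closed forms. A minimal numerical sketch for the bulk modulus K of a two-phase composite, with illustrative (assumed) quartz-like and clay-like moduli rather than values from the paper:

```python
# Voigt/Reuss bounds, Voigt-Reuss-Hill average and Hashin-Shtrikman bounds
# for the bulk modulus of a two-phase composite (illustrative moduli).
f1, f2 = 0.6, 0.4          # volume fractions of the two phases
K1, K2 = 37.0, 21.0        # bulk moduli, GPa (quartz-like, clay-like)
mu1, mu2 = 44.0, 7.0       # shear moduli, GPa

K_voigt = f1 * K1 + f2 * K2                 # arithmetic mean: upper bound
K_reuss = 1.0 / (f1 / K1 + f2 / K2)         # harmonic mean: lower bound
K_hill  = 0.5 * (K_voigt + K_reuss)         # Voigt-Reuss-Hill average

def hs_bound(K_a, mu_a, K_b, f_a, f_b):
    """Hashin-Shtrikman bound on K; upper when phase 'a' is the stiffer one."""
    return K_a + f_b / (1.0 / (K_b - K_a) + f_a / (K_a + 4.0 * mu_a / 3.0))

K_hs_upper = hs_bound(K1, mu1, K2, f1, f2)
K_hs_lower = hs_bound(K2, mu2, K1, f2, f1)

print(f"Reuss {K_reuss:.2f} <= HS- {K_hs_lower:.2f} "
      f"<= HS+ {K_hs_upper:.2f} <= Voigt {K_voigt:.2f} GPa")
```

The Hashin-Shtrikman bracket always sits inside the Voigt-Reuss bracket, which is the "narrower range" the abstract refers to; the paper's contribution is to carry such bounds over to the viscoelastic case via the correspondence principle.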
{"title":"Bounds and averages of seismic quality factor Q","authors":"Ayman N. Qadrouh, José M. Carcione, Mamdoh Alajmi, Jing Ba","doi":"10.1007/s11200-019-1247-y","DOIUrl":"https://doi.org/10.1007/s11200-019-1247-y","url":null,"abstract":"<p>An elastic two-phase composite, with no restriction on the shape of the two phases, has stiffness bounds given by the Reuss and Voigt equations, and a narrower range determined by the Hashin-Shtrikman bounds. Averages are given by the Voigt-Reuss-Hill, Hashin-Shtrikman, Gassmann, Backus and Wyllie equations. To obtain stiffness bounds and averages, we invoke the correspondence principle to compute the solution of the viscoelastic problem from the corresponding elastic solution. Then, seismic velocities and attenuation are established for the above — physical and heuristic — models which account for general geometrical shapes, unlike the Backus average. The approach is relevant to the seismic characterization of solid composites such as hydrocarbon source rocks.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"100 - 113"},"PeriodicalIF":0.9,"publicationDate":"2020-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1247-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4682029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2020-01-10 DOI: 10.1007/s11200-019-1942-8
Carlos A. Vasquez, Sabrina Y. Fazzito
A simple phenomenological model based on Lorentzian functions is evaluated on the first derivative of magnetic hysteresis loops from several artificial samples containing iron oxide/oxyhydroxide mixtures that imitate natural sediments. The approach, which shows that hysteresis loops can be described by elementary analytical functions and provides estimates of magnetization parameters to a satisfactory degree of confidence, can be applied with standard data-analysis software. Distorted hysteresis loops (wasp-waisted, goose-necked and pot-bellied) from simulations and from the artificial samples of a previous work are reproduced by the model, which makes it straightforward to unmix the ferromagnetic signals of minerals such as magnetite, greigite, haematite and goethite. The analyses reveal that the contribution of the ferrimagnetic fraction, though present in a minor concentration (≤2.15 wt%), dominates the magnetization.
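The Lorentzian decomposition idea can be sketched with standard curve-fitting tools. All parameter values below are illustrative assumptions, not taken from the paper; the sketch simply expresses the loop derivative dM/dH of one hysteresis branch as a sum of Lorentzian components, one per magnetic phase, and recovers the components from noisy data:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(H, amp, H0, gamma):
    """Single Lorentzian component of dM/dH, centred at field H0."""
    return amp * gamma**2 / ((H - H0)**2 + gamma**2)

def two_phase(H, a1, H01, g1, a2, H02, g2):
    """Two-component mixture, e.g. a soft (magnetite-like) and a hard phase."""
    return lorentzian(H, a1, H01, g1) + lorentzian(H, a2, H02, g2)

# Synthetic dM/dH curve: a narrow soft component plus a broad hard one
# (amplitudes, centres and widths are illustrative assumptions).
H = np.linspace(-500.0, 500.0, 801)                 # applied field, mT
truth = (1.0, -15.0, 20.0, 0.25, -120.0, 80.0)
rng = np.random.default_rng(0)
dMdH = two_phase(H, *truth) + rng.normal(0.0, 0.005, H.size)

# Fit with a rough initial guess; curve_fit does the nonlinear least squares.
popt, _ = curve_fit(two_phase, H, dMdH,
                    p0=(0.8, 0.0, 30.0, 0.2, -100.0, 60.0))
```

Integrating each fitted Lorentzian over the field range gives the contribution of each phase to the total magnetization, which is the sense in which the model "unmixes" the ferromagnetic signals.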
{"title":"Simple hysteresis loop model for rock magnetic analysis","authors":"Carlos A. Vasquez, Sabrina Y. Fazzito","doi":"10.1007/s11200-019-1942-8","DOIUrl":"https://doi.org/10.1007/s11200-019-1942-8","url":null,"abstract":"<p>A simple phenomenological model founded on Lorentzian functions is evaluated on the first derivative of magnetic hysteresis loops from several artificial samples with iron oxide/oxyhydroxide mixtures imitating natural sediments. The approach, which shows that hysteresis loops can be described by elementary analytical functions and provides estimates of magnetization parameters to a satisfactory degree of confidence, is applied with the help of standard data analysis software. Distorted hysteresis loops (wasp-waisted, goose-necked and pot-bellied shaped) from simulations and artificial samples from a previous work are reproduced by the model which allows to straightforwardly unmix the ferromagnetic signal from different minerals like magnetite, greigite, haematite and goethite. The analyses reveal that the contribution from the ferrimagnetic fraction, though present in a minor concentration (≤2.15 wt%), dominates the magnetization.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"114 - 129"},"PeriodicalIF":0.9,"publicationDate":"2020-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1942-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4419548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}