Pub Date: 2020-04-06, DOI: 10.1007/s11200-019-0356-y
Robert Duchnowski, Zbigniew Wiśniewski
This paper concerns squared Msplit(q) estimation and its robustness against outliers. Previous studies in this field have been based on theoretical approaches. It has been proven that a conventional analysis of robustness is insufficient for Msplit(q) estimation, because the functional model is split into q competitive ones and, hence, q competitive versions of the model parameters are estimated. Thus, robustness should be considered from the global point of view (the traditional approach) and from the local point of view (robustness in the relation between two "neighboring" estimates of the parameters). Theoretical considerations have produced many interesting findings about the robustness of Msplit(q) estimation and of squared Msplit(q) estimation, although some of these features are asymptotic. This paper therefore focuses on an empirical analysis of the robustness of squared Msplit(q) estimation for finite samples and thus provides information on robustness from a more practical point of view. The analyses are based mostly on Monte Carlo simulations. Different numbers of observation aggregations are considered to determine how the assumed value of q influences the estimation results. The analysis shows that local robustness (empirical local breakdown points) is fully compatible with the theoretical derivations. Global robustness, in contrast, depends strongly on a correct assumption regarding q. If it suits reality, i.e. if we predict the number of observation aggregations and the number of outliers correctly, then squared Msplit(q) estimation can be an alternative to classical robust estimation; this is confirmed by empirical comparisons between the method in question and robust M-estimation (the Huber method). On the other hand, if the assumed value of q is incorrect, then squared Msplit(q) estimation usually breaks down.
{"title":"Robustness of squared Msplit(q) estimation: Empirical analyses","authors":"Robert Duchnowski, Zbigniew Wiśniewski","doi":"10.1007/s11200-019-0356-y","DOIUrl":"https://doi.org/10.1007/s11200-019-0356-y","url":null,"abstract":"<p>This paper concerns squared M<sub>split(q)</sub> estimation and its robustness against outliers. Previous studies in this field have been based on theoretical approaches. It has been proven that a conventional analysis of robustness is insufficient for M<sub>split(q)</sub> estimation. This is due to the split of the functional model into q competitive ones and, hence, the estimation of q competitive versions of the parameters of such models. Thus, we should consider robustness from the global point of view (traditional approach) and from the local point of view (robustness in relation between two “neighboring” estimates of the parameters). Theoretical considerations have generally produced many interesting findings about the robustness of M<sub>split(q)</sub> estimation and the robustness of the squared M<sub>split(q)</sub> estimation, although some of features are asymptotic. Therefore, this paper is focused on empirical analysis of the robustness of the squared M<sub>split(q)</sub> estimation for finite samples and, hence, it produces information on robustness from a more practical point of view. Mostly, the analyses are based on Monte Carlo simulations. A different number of observation aggregations are considered to determine how the assumption of different values of q influence the estimation results. The analysis shows that local robustness (empirical local breakdown points) is fully compatible with the theoretical derivations. Global robustness is highly dependent on the correct assumption regarding q. If it suits reality, i.e. if we predict the number of observation aggregations and the number of outliers correctly, then the squared M<sub>split(q)</sub> estimation can be an alternative to classical robust estimations. This is confirmed by empirical comparisons between the method in question and the robust M-estimation (the Huber method). On the other hand, if the assumed value of q is incorrect, then the squared M<sub>split(q)</sub> estimation usually breaks down.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 2","pages":"153 - 171"},"PeriodicalIF":0.9,"publicationDate":"2020-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-0356-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4233848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-06, DOI: 10.1007/s11200-020-0269-9
Lieqian Dong, Changhui Wang, Mugang Zhang, Deying Wang, Xiaofeng Liang
The high-density acquisition technique can improve subsurface imaging accuracy. However, it raises production cost rapidly, which limits its wide application in practice. To address this issue, high-productivity blended acquisition has emerged as a promising way to significantly increase the efficiency of seismic acquisition and reduce production cost. The great challenge of blended acquisition lies in the severe interference noise of simultaneous sources; its success therefore relies heavily on how effectively the effective energy can be separated from the blended noise. We propose a blended noise suppression approach that combines a median filter, normal moveout (NMO), and the complex curvelet transform (CCT). First, a median filter is applied to the original data after NMO correction. Second, a CCT-based thresholding denoising method is used to extract the remaining effective energy from the median-filtered data, giving a preliminary de-blended result. Next, updated data are obtained by subtracting the pseudo-de-blended data of the de-blended result from the original data, and the process iterates. Finally, the de-blended result is obtained by accumulating the energy retrieved at each iteration until the signal-to-noise ratio reaches the desired level. We demonstrate the effectiveness of the proposed approach on synthetic and field data examples.
{"title":"Blended noise suppression using a hybrid median filter, normal moveout and complex curvelet transform approach","authors":"Lieqian Dong, Changhui Wang, Mugang Zhang, Deying Wang, Xiaofeng Liang","doi":"10.1007/s11200-020-0269-9","DOIUrl":"https://doi.org/10.1007/s11200-020-0269-9","url":null,"abstract":"<p>The high-density acquisition technique can improve subsurface imaging accuracy. However, it increases production cost rapidly and limits the wide application in practice. To solve this issue, the high productivity blending acquisition technology has emerged as a promising way to significantly increase the efficiency of seismic acquisition and reduce production cost. The great challenge of the blending acquisition technology lies in the severe interference noise of simultaneous sources. Therefore, the success of the blending acquisition technology relies heavily on the effectiveness of separating effective energy from the blended noise. We propose a blended noise suppression approach by using a hybrid median filter, normal moveout (NMO), and complex curvelet transform (CCT) approach. First, median filter is applied to original data after NMO correction. Second, the CCT-based thresholding denoising method is used to extract the remained effective energy from the data after median filtering to get the preliminary de-blended result. Next, the updated data are obtained by subtracting the pseudo-de-blended data of the de-blended result from the original data, and the process iterates. Last, the final de-blended result is obtained by adding the retrieved energy at each iteration until the signal-to-noise ratio satisfies the desired level. We demonstrate the effectiveness of the proposed approach on simulated synthetic and field data examples.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 2","pages":"241 - 254"},"PeriodicalIF":0.9,"publicationDate":"2020-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-020-0269-9","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4237911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-06, DOI: 10.1007/s11200-018-1240-x
Hussein A. Abd-Elmotaal, Norbert Kühtreiber
The determination of the gravimetric geoid is based on the magnitude of the gravity observed at the topographic surface of the Earth. In order to satisfy Laplace's equation, the masses between the surface of the Earth and the geoid must be removed or shifted inside the geoid. The gravity values then have to be reduced to the geoid, forming the boundary values on the boundary surface. Gravity reduction techniques using unclassified Digital Terrain Models (DTM) usually presume that negative elevations are reserved for ocean stations. In the case of the Qattara Depression, however, the elevations are negative, i.e. below sea level, on land. This leads to an obvious error in the topographic-isostatic reduction computed, for example, with the TC program using an unclassified DTM: the depression is assumed to be filled with water instead of air, and the computation is carried out at the non-existing sea level instead of at the actual negative topography. The aim of this paper is to determine the effect of the Qattara Depression on gravity reduction and geoid computation, as a prototype of the effect of unclassified land depressions on gravity reduction and geoid determination. The results show that the effect of the Qattara Depression on the gravity reduction reaches 20 mGal and is restricted to the depression area, while its effect on the geoid exceeds 1 m and is regional, extending over a distance of about 1000 km.
{"title":"Effect of Qattara Depression on gravity and geoid using unclassified digital terrain models","authors":"Hussein A. Abd-Elmotaal, Norbert Kühtreiber","doi":"10.1007/s11200-018-1240-x","DOIUrl":"https://doi.org/10.1007/s11200-018-1240-x","url":null,"abstract":"<p>The determination of the gravimetric geoid is based on the magnitude of the gravity observed at the topographic surface of the Earth. In order to satisfy Laplace’s equation, the masses between the surface of the Earth and the geoid must be removed or shifted inside the geoid. Then the gravity values have to be reduced to the geoid, forming the boundary values on the boundary surface. Gravity reduction techniques using unclassified Digital Terrain Models (DTM) usually presume that negative elevations are reserved for ocean stations. In case of Qattara Depression, the elevations are negative, i.e., below sea level. This leads to an obvious error in the topographic-isostatic reduction using, for example, TC-program employing unclassified DTM by assuming water masses filling the depression instead of air, besides computing at the non-existing sea level instead of computing at the actual negative topography. The aim of this paper is to determine the effect of Qattara Depression on gravity reduction and geoid computation, as a prototype of the effect of the unclassified land depressions on gravity reduction and geoid determination. The results show that the effect of Qattara Depression on the gravity reduction reaches 20 mGal and is restricted only to the depression area, while its effect on the geoid exceeds 1 m and has a regional effect which extends over a distance of about 1000 km.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 2","pages":"186 - 201"},"PeriodicalIF":0.9,"publicationDate":"2020-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-018-1240-x","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4236918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-04-06, DOI: 10.1007/s11200-019-0273-0
Saber Jahanjooy, Mohammad Pirouei, Kamal Kolo
To study subsurface features and structures, the gravity effect of the surrounding topography should be removed from acquired gravity field data. Several methods are available to calculate the terrain effect at each gravity station. Some of them are tedious and time-consuming, owing to the large number of calculations required, long-distance terrain effects and small acceptable errors; the faster methods, in turn, do not meet the accuracy requirements of local surveys. In rough topography, using an average elevation for the sectors of the calculation area leads to over- or underestimation of the terrain effect. Moreover, since most terrain correction methods employ a pre-divided grid or mesh over the survey area, the resulting sectors do not match geographic features with distinct mass centers. An Optimally Selecting Sectors (OSS) algorithm is proposed, which automatically partitions the surrounding area into a set of sectors that separates different mass centers, finds the optimum elevation of these sectors, and calculates the terrain effect at the gravity stations. This procedure improves the accuracy of the calculated terrain effects. A tolerance inside the algorithm controls the accuracy of the method, and the choice of this tolerance depends on the intended application of the gravity data. Application of the method to synthetic models of different geometrical shapes and to real digital elevation data of a mountainous area in the Kurdistan region shows an improvement in the accuracy of the terrain correction, although the OSS method requires extra computation compared to some previous terrain correction methods.
{"title":"High accuracy gravity terrain correction by Optimally Selecting Sectors algorithm based on Hammer charts method","authors":"Saber Jahanjooy, Mohammad Pirouei, Kamal Kolo","doi":"10.1007/s11200-019-0273-0","DOIUrl":"https://doi.org/10.1007/s11200-019-0273-0","url":null,"abstract":"<p>To study the subsurface features and structures, the gravity effects of the surrounding topography should be reduced from acquired gravity field data. Several methods are available to calculate terrain effects on each gravity station. Some of these methods are tedious and time consuming due to a large number of calculations, long-distance terrain effects, and minimum acceptable errors. The other fast methods do not fulfill the accuracy requirement for local surveys. In rough topographies, using average elevation for sectors of the calculation area leads to overestimation or underestimation of the terrain effect. Since most of the terrain correction methods employ the pre-divided web or mesh for the survey area, the used sectors do not match geographical features with distinct mass centers. An Optimally Selecting Sectors (OSS) algorithm is proposed, which automatically partitions the surrounding area to a set of sectors in a way that separates different mass centers, finds optimum elevation of these sectors, and calculates terrain effect at the gravity stations. This new procedure improves the accuracy of calculated terrain effects. A proper tolerance inside the algorithm controls the accuracy of the method. Defining this tolerance relies on the application of gravity data. The application of this method on synthetic models with different geometrical shapes and real digital elevation data of a mountainous area at the Kurdistan region shows improvement in the accuracy of terrain correction. However, the proposed method OSS introduces extra calculation compared to some of the previous terrain correction methods.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 2","pages":"172 - 185"},"PeriodicalIF":0.9,"publicationDate":"2020-04-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-0273-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4236961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-17, DOI: 10.1007/s11200-019-1172-0
Hao Zhao, Anders Ueland Waldeland, Dany Rueda Serrano, Martin Tygel, Einar Iversen
Advanced seismic imaging and inversion depend on a velocity model that is sufficiently accurate to render reliable and meaningful results. For that reason, methods for extracting such velocity models from seismic data are always in high demand and are topics of active investigation. Velocity models can be obtained in both the time and depth domains. Relying on the former, time migration is an inexpensive, quick and robust process. In spite of its limitations, especially in the case of complex geologies, time migration can, in many instances (e.g. simple to moderately complex geological structures), produce image results comparable to those required for the project at hand. An accurate time-velocity model can also be of great use in the construction of an initial depth-velocity model, from which a high-quality depth image can be produced. Based on available explicit and analytical expressions that relate the kinematic attributes (namely, traveltimes and local slopes) of local events in the recording (demigration) and migrated domains, we revisit tomographic methodologies for velocity-model building, with a specific focus on the time domain and on methods that use local slopes, as well as traveltimes, as key attributes for imaging. We also adopt the strategy of estimating local inclinations in the time-migrated domain (where noise is weaker and focusing better) and using demigration to estimate those inclinations in the recording domain. On the theoretical side, the main contributions of this work are twofold: 1) we base the velocity-model estimation on kinematic migration/demigration techniques that are nonlinear (and therefore more accurate than simplistic linear approaches), and 2) the corresponding Fréchet derivatives take into account that the velocity model is laterally heterogeneous. In addition to providing the comprehensive mathematical algorithms involved, three proof-of-concept numerical examples are presented, confirming the potential of our methodology.
{"title":"Time-migration velocity estimation using Fréchet derivatives based on nonlinear kinematic migration/demigration solvers","authors":"Hao Zhao, Anders Ueland Waldeland, Dany Rueda Serrano, Martin Tygel, Einar Iversen","doi":"10.1007/s11200-019-1172-0","DOIUrl":"https://doi.org/10.1007/s11200-019-1172-0","url":null,"abstract":"<p>Advanced seismic imaging and inversion are dependent on a velocity model that is sufficiently accurate to render reliable and meaningful results. For that reason, methods for extracting such velocity models from seismic data are always in high demand and are topics of active investigation. Velocity models can be obtained from both the time and depth domains. Relying on the former, time migration is an inexpensive, quick and robust process. In spite of its limitations, especially in the case of complex geologies, time migration can, in many instances (e.g. simple to moderate geological structures), produce image results compatible to the those required for the project at hand. An accurate time-velocity model can be of great use in the construction of an initial depth-velocity model, from which a high-quality depth image can be produced. Based on available explicit and analytical expressions that relate the kinematic attributes (namely, traveltimes and local slopes) of local events in the recording (demigration) and migrated domains, we revisit tomographic methodologies for velocity-model building, with a specific focus on the time domain, and on those that makes use of local slopes, as well as traveltimes, as key attributes for imaging. We also adopt the strategy of estimating local inclinations in the time-migrated domain (where we have less noise and better focus) and use demigration to estimate those inclinations in the recording domain. On the theoretical side, the main contributions of this work are twofold: 1) we base the velocity model estimation on kinematic migration/demigration techniques that are nonlinear (and therefore more accurate than simplistic linear approaches) and 2) the corresponding Fréchet derivatives take into account that the velocity model is laterally heterogeneous. In addition to providing the comprehensive mathematical algorithms involved, three proof-of-concept numerical examples are demonstrated, which confirm the potential of our methodology.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"26 - 75"},"PeriodicalIF":0.9,"publicationDate":"2020-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1172-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4677454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-17, DOI: 10.1007/s11200-019-1247-y
Ayman N. Qadrouh, José M. Carcione, Mamdoh Alajmi, Jing Ba
An elastic two-phase composite, with no restriction on the shape of the two phases, has stiffness bounds given by the Reuss and Voigt equations and a narrower range determined by the Hashin-Shtrikman bounds. Averages are given by the Voigt-Reuss-Hill, Hashin-Shtrikman, Gassmann, Backus and Wyllie equations. To obtain stiffness bounds and averages, we invoke the correspondence principle to compute the solution of the viscoelastic problem from the corresponding elastic solution. Seismic velocities and attenuation are then established for the above physical and heuristic models, which, unlike the Backus average, account for general geometrical shapes. The approach is relevant to the seismic characterization of solid composites such as hydrocarbon source rocks.
{"title":"Bounds and averages of seismic quality factor Q","authors":"Ayman N. Qadrouh, José M. Carcione, Mamdoh Alajmi, Jing Ba","doi":"10.1007/s11200-019-1247-y","DOIUrl":"https://doi.org/10.1007/s11200-019-1247-y","url":null,"abstract":"<p>An elastic two-phase composite, with no restriction on the shape of the two phases, has stiffness bounds given by the Reuss and Voigt equations, and a narrower range determined by the Hashin-Shtrikman bounds. Averages are given by the Voigt-Reuss-Hill, Hashin-Shtrikman, Gassmann, Backus and Wyllie equations. To obtain stiffness bounds and averages, we invoke the correspondence principle to compute the solution of the viscoelastic problem from the corresponding elastic solution. Then, seismic velocities and attenuation are established for the above — physical and heuristic — models which account for general geometrical shapes, unlike the Backus average. The approach is relevant to the seismic characterization of solid composites such as hydrocarbon source rocks.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"100 - 113"},"PeriodicalIF":0.9,"publicationDate":"2020-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1247-y","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4682029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-10, DOI: 10.1007/s11200-019-1942-8
Carlos A. Vasquez, Sabrina Y. Fazzito
A simple phenomenological model based on Lorentzian functions is evaluated on the first derivative of magnetic hysteresis loops from several artificial samples containing iron oxide/oxyhydroxide mixtures that imitate natural sediments. The approach, which shows that hysteresis loops can be described by elementary analytical functions and provides estimates of magnetization parameters to a satisfactory degree of confidence, can be applied with standard data-analysis software. Distorted hysteresis loops (wasp-waisted, goose-necked and pot-bellied) from simulations and from artificial samples of a previous work are reproduced by the model, which makes it possible to straightforwardly unmix the ferromagnetic signals of different minerals such as magnetite, greigite, haematite and goethite. The analyses reveal that the contribution of the ferrimagnetic fraction, though present in a minor concentration (≤2.15 wt%), dominates the magnetization.
{"title":"Simple hysteresis loop model for rock magnetic analysis","authors":"Carlos A. Vasquez, Sabrina Y. Fazzito","doi":"10.1007/s11200-019-1942-8","DOIUrl":"https://doi.org/10.1007/s11200-019-1942-8","url":null,"abstract":"<p>A simple phenomenological model founded on Lorentzian functions is evaluated on the first derivative of magnetic hysteresis loops from several artificial samples with iron oxide/oxyhydroxide mixtures imitating natural sediments. The approach, which shows that hysteresis loops can be described by elementary analytical functions and provides estimates of magnetization parameters to a satisfactory degree of confidence, is applied with the help of standard data analysis software. Distorted hysteresis loops (wasp-waisted, goose-necked and pot-bellied shaped) from simulations and artificial samples from a previous work are reproduced by the model which allows to straightforwardly unmix the ferromagnetic signal from different minerals like magnetite, greigite, haematite and goethite. The analyses reveal that the contribution from the ferrimagnetic fraction, though present in a minor concentration (≤2.15 wt%), dominates the magnetization.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"114 - 129"},"PeriodicalIF":0.9,"publicationDate":"2020-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1942-8","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4419548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-01-10, DOI: 10.1007/s11200-019-1217-4
Kateřina Vymazalová, Lenka Vargová, Ladislava Horáčková, Jiří Kala, Michal Přichystal, Ivo Světlík, Kateřina Pachnerová Brabcová, Veronika Brychová
The dating of skeletal remains in archaeology is difficult, especially for finds without grave goods. In such cases, apart from literary and iconographic sources and anthropological and palaeopathological analyses, radiocarbon dating can also be used. We present an example in which we used this procedure to date the skeletal remains from an anonymous recent mass grave found in the cellars of a house in Brno (Czech Republic). On the basis of an assessment of the archaeological and anthropological context, in combination with radiocarbon dating, it could be concluded that the skeletal remains were most likely those of soldiers who died in a provisional military hospital of injury or infection after the Battle of Austerlitz in 1805. An alternative hypothesis, that they are the remains of soldiers who died in the Battle of Hradec Králové in 1866, was excluded by the radiocarbon dating.
{"title":"Use of the radiocarbon method for dating of skeletal remains of a mass grave (Brno, the Czech Republic)","authors":"Kateřina Vymazalová, Lenka Vargová, Ladislava Horáčková, Jiří Kala, Michal Přichystal, Ivo Světlík, Kateřina Pachnerová Brabcová, Veronika Brychová","doi":"10.1007/s11200-019-1217-4","DOIUrl":"https://doi.org/10.1007/s11200-019-1217-4","url":null,"abstract":"<p>The dating of skeletal remains in archaeology is difficult, especially at findings without burial equipment. In this case, apart from literary and iconographic sources, anthropological and palaeopathological analyses, the radiocarbon dating method can also be used. We present an example where we used this procedure in the dating of the skeletal remains of an anonymous recent mass grave, found in the cellars of one of the houses in Brno (Czech Republic). On the basis of an assessment of the archaeological and anthropological context, in combination with radiocarbon dating, it could be concluded that the found skeletal remains were most likely of soldiers who died in the provisional military hospital as a result of injury or infection after the Battle of Austerlitz in 1805. An alternative hypothesis, that they are the remains of soldiers who died in the Battle of Hradec Králové in 1866, was excluded by radiocarbon dating.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"143 - 152"},"PeriodicalIF":0.9,"publicationDate":"2020-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1217-4","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4420692","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-12-20, DOI: 10.1007/s11200-019-1154-2
Mariano Fagre, Bruno S. Zossi, Erdal Yiğit, Hagay Amit, Ana G. Elias
{"title":"High frequency sky wave propagation during geomagnetic field reversals","authors":"Mariano Fagre, Bruno S. Zossi, Erdal Yiğit, Hagay Amit, Ana G. Elias","doi":"10.1007/s11200-019-1154-2","DOIUrl":"https://doi.org/10.1007/s11200-019-1154-2","url":null,"abstract":"","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"130 - 142"},"PeriodicalIF":0.9,"publicationDate":"2019-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1154-2","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4782116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-12-05, DOI: 10.1007/s11200-019-1067-0
Majid Abrehdary, Lars E. Sjöberg
Isostasy is a key concept in geoscience for interpreting the state of mass balance between the Earth's lithosphere and the viscous asthenosphere. A more satisfactory test of isostasy is to determine the depth to, and the density contrast between, crust and mantle at the Moho discontinuity (Moho). Generally, the Moho can be mapped from seismic information, but the limited coverage of such data over large portions of the world (in particular at sea) and economic considerations make a combined gravimetric-seismic method a more realistic approach. Determining high-resolution Moho constituents for marine areas requires combining gravimetric and seismic data so as to substantially diminish the seismic data gaps. In this study, we estimate the Moho constituents globally for ocean regions to a resolution of 1° × 1° by applying the Vening Meinesz-Moritz method to gravimetric data and combine them with estimates derived from seismic data in a new model named COMHV19. The GMG14 satellite-altimetry-derived marine gravity field, the Earth2014 topographic/bathymetric model, and the CRUST1.0 and CRUST19 crustal seismic models are used in a least-squares procedure. The numerical computations show that the Moho depth ranges from 7.3 km (at the Kolbeinsey Ridge) to 52.6 km (in the Gulf of Bothnia), with a global average of 16.4 km and a standard deviation of the order of 7.5 km. The estimated Moho density contrast varies from 20 kg m⁻³ (north of Iceland) to 570 kg m⁻³ (in the Baltic Sea), with a global average of 313.7 kg m⁻³ and a standard deviation of the order of 77.4 kg m⁻³. When the computed Moho depths are compared with current knowledge of crustal structure, they are generally in good agreement with other crustal models. However, in certain regions, such as oceanic spreading ridges and hot spots, we generally obtain a thinner crust than other models propose, which is likely the result of improvements in the new model. We also see evidence for thickening of the oceanic crust with increasing age. Hence, the new combined Moho model images rather reliable information over most oceanic areas, in particular at ocean ridges, which are important features of ocean basins.
{"title":"Estimating a combined Moho model for marine areas via satellite altimetric - gravity and seismic crustal models","authors":"Majid Abrehdary, Lars E. Sjöberg","doi":"10.1007/s11200-019-1067-0","DOIUrl":"https://doi.org/10.1007/s11200-019-1067-0","url":null,"abstract":"<p>Isostasy is a key concept in geoscience in interpreting the state of mass balance between the Earth’s lithosphere and viscous asthenosphere. A more satisfactory test of isostasy is to determine the depth to and density contrast between crust and mantle at the Moho discontinuity (Moho). Generally, the Moho can be mapped by seismic information, but the limited coverage of such data over large portions of the world (in particular at seas) and economic considerations make a combined gravimetric-seismic method a more realistic approach. The determination of a high-resolution of the Moho constituents for marine areas requires the combination of gravimetric and seismic data to diminish substantially the seismic data gaps. In this study, we estimate the Moho constituents globally for ocean regions to a resolution of 1<b>°</b> × 1° by applying the Vening Meinesz-Moritz method from gravimetric data and combine it with estimates derived from seismic data in a new model named COMHV19. The data files of GMG14 satellite altimetry-derived marine gravity field, the Earth2014 Earth topographic/bathymetric model, CRUST1.0 and CRUST19 crustal seismic models are used in a least-squares procedure. The numerical computations show that the Moho depths range from 7.3 km (in Kolbeinsey Ridge) to 52.6 km (in the Gulf of Bothnia) with a global average of 16.4 km and standard deviation of the order of 7.5 km. Estimated Moho density contrasts vary between 20 kg m<sup>-3</sup> (north of Iceland) to 570 kg m<sup>-3</sup> (in Baltic Sea), with a global average of 313.7 kg m<sup>-3</sup> and standard deviation of the order of 77.4 kg m<sup>-3</sup>. When comparing the computed Moho depths with current knowledge of crustal structure, they are generally found to be in good agreement with other crustal models. However, in certain regions, such as oceanic spreading ridges and hot spots, we generally obtain thinner crust than proposed by other models, which is likely the result of improvements in the new model. We also see evidence for thickening of oceanic crust with increasing age. Hence, the new combined Moho model is able to image rather reliable information in most of the oceanic areas, in particular in ocean ridges, which are important features in ocean basins.</p>","PeriodicalId":22001,"journal":{"name":"Studia Geophysica et Geodaetica","volume":"64 1","pages":"1 - 25"},"PeriodicalIF":0.9,"publicationDate":"2019-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1007/s11200-019-1067-0","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"4203469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}