Pub Date: 2024-05-31 | DOI: 10.1007/s00024-024-03501-4
Win Thu Zar, Jacques Lochard, Martin B. Kalinowski, Andrew Collinson, Thierry Schneider
In the event of an on-site inspection (OSI) under the Comprehensive Nuclear-Test-Ban Treaty (CTBT), inspectors and support staff of the inspected state party may encounter situations presenting similarities to those resulting from past radiation emergencies. The Chernobyl and Fukushima nuclear power plant accidents have shown that the so-called “co-expertise” process recommended by the International Commission on Radiological Protection (ICRP) is an effective lever for empowering affected populations so that they can take informed decisions concerning their own protection. After a reminder of the constituent elements of the co-expertise approach as well as the context of CTBT inspections, the article describes how some key elements of the co-expertise process could be incorporated into the training program of surrogate on-site inspectors and inspection teams to address possible concerns regarding the consequences of radiological contamination, if present.
Title: "What On-site Inspectors Under the Comprehensive Nuclear-Test-Ban Treaty Can Learn from The “Co-expertise Process” Experiences Implemented After the Chernobyl and Fukushima Nuclear Power Plant Accidents?" (pure and applied geophysics)
Pub Date: 2024-05-31 | DOI: 10.1007/s00024-024-03514-z
Shahram Angardi, Ramin Vafaei Poursorkhabi, Ahmad Zarean Shirvanehdeh, Rouzbeh Dabiri
Adequate estimation of the S-wave velocity (Vs) structure is a key requirement in seismic microzonation studies. For this purpose, different techniques, such as down-hole measurements and inversion of surface-wave dispersion curves, have been proposed for modeling the Vs profile. In the last decade, modeling the Vs profile from the Rayleigh wave ellipticity curve (H/V) has become more widely applied owing to its rapid and simple data-gathering procedure. However, given the ambiguities in the inversion of H/V curves, a priori information, such as down-hole measurements, is vital to constrain the final Vs model and obtain reliable results. This study addresses this challenge and, based on a hybrid artificial intelligence method, introduces a new technique to invert the Rayleigh wave ellipticity curve with acceptable performance. First, the model parameters (i.e., the number of layers and the corresponding thicknesses and shear-wave velocities) were predicted by an ensemble of neural networks (ENN); further inversion by the jellyfish search (JS) algorithm (the ENN-JS inversion method) was then carried out to obtain a more reasonable Vs model. To build the ensemble system, ten base networks were arranged. To train the neural networks, synthetic Rayleigh wave ellipticity data were generated by a forward-modeling approach. The outputs of the base networks were combined using the averaging method. Then, the JS inversion algorithm was applied to estimate the final Vs model. The ENN provides essential information to the JS search algorithm on the number of layers and proper search spaces for the model parameters. The ENN-JS inversion technique was tested on synthetic and actual datasets. The findings show that the proposed method provides a robust approach for the inversion of Rayleigh wave ellipticity data.
Title: "Vs Profiling by the Inversion of Rayleigh Wave Ellipticity Curve Using a Hybrid Artificial Intelligence Method" (pure and applied geophysics 181(6), 1831–1844)
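The averaging step of the ensemble described in the abstract can be sketched in a few lines. The toy "base networks" below are hypothetical stand-ins for the ten trained networks, not the paper's actual models:

```python
import numpy as np

def ensemble_predict(networks, ellipticity_curve):
    """Combine the outputs of several base networks by simple averaging.

    `networks` is a list of callables, each mapping an H/V ellipticity curve
    to a parameter vector (e.g., layer thicknesses and Vs values).
    """
    preds = np.array([net(ellipticity_curve) for net in networks])
    return preds.mean(axis=0)  # averaged parameter vector

# Hypothetical stand-ins for ten trained base networks:
nets = [lambda x, b=b: np.array([100.0 + b, 300.0 + 2 * b]) for b in range(10)]
curve = np.zeros(50)  # placeholder H/V curve
print(ensemble_predict(nets, curve))
```

In the paper's workflow, the averaged ENN prediction would then seed the jellyfish search algorithm with layer counts and search-space bounds.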
Pub Date: 2024-05-30 | DOI: 10.1007/s00024-024-03506-z
Hasan Aldashti, Zaher AlAbadla, Mohamad Magdy Abdel Wahab, Mohamed F. Yassin
The relationship between particulate matter and economic growth, as well as the relationship between economic growth and greenhouse gas emissions, has been the topic of considerable investigation over the past two decades. Kuwait has a hot, dry desert climate in which outdoor air is affected by both natural and anthropogenic factors. Fine particulate matter (PM2.5) samples were collected monthly over a 41-year period (1980–2021) across the state of Kuwait. This study presents a detailed correlation and regression analysis between PM2.5 mass column concentration and socioeconomic factors over the same period, namely GDP per capita (GDPP), greenhouse gas emissions, and population density. The correlation between GDP per capita and PM2.5 concentration is positive and statistically supported at the highest level of significance. Greenhouse gas emissions and population density also exhibit significant positive effects, demonstrating that these two factors strongly affect PM2.5 pollution. The results of the regression analysis for Kuwait show a significant positive relationship between GDP per capita and PM2.5, which remained significant at the 1% level. According to the results reported in the study, an increase in GDP per capita should be accompanied by an increase in PM2.5 column density, and vice versa. A significant positive correlation of 0.8805 was found between the Physiological Equivalent Temperature (PET) in extremely hot years and Gross Domestic Product (GDP). Human activities lead to an environmental imbalance that will certainly affect future generations, so what is required is a sense of moral responsibility toward the environment around us.
Title: "Impacts of Socioeconomic Development on Fine Particulate Matter (PM2.5) and Human Comfort in the State of Kuwait" (pure and applied geophysics 181(6), 1907–1918)
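A correlation-and-regression analysis of the kind described above can be sketched as follows. The series here are synthetic illustrations, not the study's Kuwait data:

```python
import numpy as np

# Hypothetical yearly series (NOT the study's data): GDP per capita and a
# PM2.5 proxy with a loose positive dependence plus noise.
rng = np.random.default_rng(0)
gdpp = np.linspace(20_000, 40_000, 41)            # 41 years, as in the study
pm25 = 30 + 0.001 * gdpp + rng.normal(0, 1, 41)   # synthetic PM2.5 series

r = np.corrcoef(gdpp, pm25)[0, 1]             # Pearson correlation coefficient
slope, intercept = np.polyfit(gdpp, pm25, 1)  # simple OLS regression line
print(f"r = {r:.3f}, slope = {slope:.6f}")
```

A positive `r` close to 1 together with a positive, significant regression slope is the pattern the study reports between GDP per capita and PM2.5.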
Pub Date: 2024-05-30 | DOI: 10.1007/s00024-024-03504-1
Vladimir S. Travkin, Natalia A. Tikhonova, Eugeny A. Zakharchuk
Marine heatwaves (MHWs) are extreme ocean events, prolonged discrete periods of anomalously warm water, that have significant impacts on fisheries, tourism, and marine ecosystems. We identify MHWs as discrete periods (≥ 5 days) when the sea surface temperature exceeds the threshold (90th percentile) of the sea surface temperature distribution for specific calendar days, and analyze their main properties in the Baltic Sea for the period 1993−2022. We also investigate the main mechanisms of evolution of one of the most intense and longest-lasting MHWs, observed from October 2000 to March 2001. We use daily temperature, salinity, mixed-layer depth, and current-velocity data from a regional reanalysis of the Baltic Sea (2 nautical mile horizontal resolution, vertical step from 1 m at the surface to 24 m at the bottom). We also use monthly data from the ECMWF ERA5 global climate reanalysis (0.25° × 0.25°) and meteorological stations of the Swedish Meteorological and Hydrological Institute. Between 40 and 90 MHWs, with average durations of 8−24 days and average intensities of 1.75−3.25 °C, were detected in various parts of the Baltic Sea during 1993−2022. The maximum cumulative values (> 2400 °C days) were observed in the Gotland Basins, the Gulf of Finland, and the Gulf of Riga. The mean intensity and cumulative values of MHWs are strongest in summer (3.6 °C and 740 °C days). The long persistence of the MHW in the autumn–winter period 2000–2001 was associated with positive air temperature anomalies (> 4 °C) and a sharp weakening of wind speed in the Baltic region.
Title: "Characteristics of Marine Heatwaves of the Baltic Sea for 1993−2022 and Their Driving Factors" (pure and applied geophysics 181(7), 2373–2387)
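The detection rule stated above (SST above the calendar-day 90th-percentile threshold for at least five consecutive days) amounts to a run-length scan, which can be sketched as follows. The series below is a toy example, not Baltic reanalysis data:

```python
import numpy as np

def detect_mhw(sst, clim_p90, min_len=5):
    """Flag MHW events: runs of >= min_len consecutive days with SST above
    the day-specific 90th-percentile climatological threshold."""
    above = sst > clim_p90
    events, start = [], None
    for i, flag in enumerate(np.append(above, False)):  # sentinel closes runs
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if i - start >= min_len:
                events.append((start, i - 1))
            start = None
    return events

# Toy series: a 7-day exceedance (kept) and a 1-day exceedance (discarded)
sst = np.array([10, 10, 13, 13, 13, 13, 13, 13, 13, 10, 13, 10], float)
p90 = np.full_like(sst, 12.0)
print(detect_mhw(sst, p90))  # → [(2, 8)]
```

From the returned (start, end) indices, duration, mean intensity, and cumulative intensity (°C days) follow directly.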
Pub Date: 2024-05-30 | DOI: 10.1007/s00024-024-03502-3
Igor A. Beresnev
Kinematic simulations of strong ground motions require a representation of the temporal functional form of fault slip. A range of source time functions is commonly used: those generalized from numerical simulations of crack dynamics and those that radiate seismic spectra of the omega-n type. All are physically plausible, while modern source-inversion studies are still unable to better constrain the available choices. The uncertainty in kinematically simulated motions due to the ambiguity in assigning an underlying form of fault slip still requires rigorous quantification. The representation integral of elasticity is an appropriate analytical tool, providing the exact seismic field over the entire practically relevant frequency band and including all near- and far-field terms. The smooth, dynamically compatible version of the source time function, in which the rise time is the governing parameter, has the drawback of implicitly leading to unreasonably high slip rates and, as a consequence, unrealistically extreme ground velocities and accelerations. On the other hand, the functions, both dynamic and of the omega-n type, in which the static offset U and peak rate of slip vmax are the two independent controlling parameters, all provide nearly the same peak-motion values that match the prescribed, realistically observed coseismic fault-slip rates. With U and vmax as the correctly prescribed slip parameters, respectively controlling the low- and high-frequency ends of the radiated spectra, the choice between a dynamic or omega-n function leads to insignificant differences in radiation, with the uncertainty in peak motions not exceeding approximately 10%.
Title: "Choices of Slip Function and Simulated Ground Motions" (pure and applied geophysics 181(6), 1859–1869)
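To illustrate how the static offset U and the peak slip rate vmax jointly determine a smooth slip history, consider a cosine ramp s(t) = (U/2)(1 − cos(πt/T)); its peak rate is πU/(2T), so prescribing vmax fixes the rise time T = πU/(2·vmax). This is a generic illustrative form, not one of the specific source time functions compared in the paper:

```python
import numpy as np

def cosine_slip(t, U, vmax):
    """Smooth slip ramp reaching static offset U with peak slip rate vmax.

    For s(t) = (U/2)(1 - cos(pi*t/T)) the maximum of ds/dt is pi*U/(2T),
    so the rise time T follows from the two independent parameters U, vmax.
    (Illustrative form only, not the paper's specific functions.)
    """
    T = np.pi * U / (2.0 * vmax)   # rise time implied by U and vmax
    t = np.clip(t, 0.0, T)         # slip stays at U after the rise
    return 0.5 * U * (1.0 - np.cos(np.pi * t / T))

U, vmax = 1.0, 2.0                 # e.g., 1 m offset, 2 m/s peak slip rate
T = np.pi * U / (2.0 * vmax)
t = np.linspace(0, T, 10001)
rate = np.gradient(cosine_slip(t, U, vmax), t)
print(f"rise time = {T:.4f} s, numerical peak slip rate = {rate.max():.4f} m/s")
```

Fixing U and vmax independently, as the abstract emphasizes, pins down both the low-frequency (static offset) and high-frequency (peak rate) ends of the radiated spectrum regardless of the detailed shape of the ramp.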
Pub Date: 2024-05-30 | DOI: 10.1007/s00024-024-03512-1
Sgattoni Giulia, Morelli Corrado, Lattanzi Giovanni, Castellaro Silvia, Cucato Maurizio, Chwatal Werner, Mair Volkmar
Bedrock mapping is essential for understanding seismic amplification, particularly in sediment-filled valleys or basins; however, it can be difficult in urban environments. We conducted a geophysical investigation of the sediment-filled Bolzano basin in Northern Italy, where three valleys converge. This study uses low-impact, single-station geophysical methods suitable for urban areas to address the challenges of mapping in such environments. A dataset of 574 microtremor and gravity measurements, along with three seismic reflection lines, allows the basin’s deep bedrock morphology to be inferred even without direct stratigraphic data. The dataset facilitates a detailed analysis of the spatial patterns of resonance frequencies and amplitudes, revealing 1D and 2D characteristics of the resonances. Notably, 2D resonances predominate along the Adige valley, i.e., the deepest part of the basin, with depths up to 900 m. These 2D resonances, which cannot be interpreted through simple 1D frequency-depth relationships, are better understood by integrating gravity data to develop a depth model. The study identifies resonance frequencies ranging from 0.27 to over 3 Hz in Bolzano, affecting different building types during earthquakes. Maximum resonance amplitudes occur at the lower frequencies, specifically at 2D resonance sites, and therefore primarily impact tall structures. The 2D resonances are directional, with the most significant amplification occurring longitudinally along the valley axes. The resulting 3D bedrock model aids seismic site-response modeling, hydrogeological studies, and geothermal exploration, and provides insights into the geological history of the basin, highlighting the role of the Adige Valley as a major drainage pathway.
Title: "Geophysical Investigation and 3D Modeling of Bedrock Morphology in an Urban Sediment-Filled Basin: The Case of Bolzano (Northern Italy)" (pure and applied geophysics 181(6), 1871–1893)
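The "simple 1D frequency-depth relationship" mentioned above is the quarter-wavelength rule f0 = Vs/(4H). As a rough illustration with an assumed average sediment Vs of 600 m/s (a hypothetical value, not one taken from the paper), the reported 0.27 to > 3 Hz range would map to sediment thicknesses of roughly 50–550 m:

```python
def resonance_frequency(vs, depth):
    """Fundamental 1D resonance of a soft layer over bedrock: f0 = Vs / (4 H).

    vs: average shear-wave velocity of the sediment layer (m/s)
    depth: layer thickness H (m)
    """
    return vs / (4.0 * depth)

# Assumed average sediment Vs of 600 m/s (hypothetical value):
print(resonance_frequency(600.0, 50.0))   # thin cover: 3.0 Hz
print(resonance_frequency(600.0, 550.0))  # deep cover: ~0.27 Hz
```

Such 1D estimates break down where 2D resonances dominate, which is precisely why the study integrates gravity data into the depth model instead of relying on this relation alone.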
Pub Date: 2024-05-29 | DOI: 10.1007/s00024-024-03508-x
Sadaf Ahmadnejad, Mehdi Nadi, Pouya Aghelpour
The present study was designed to provide a model for the numerical estimation of surface soil moisture. The assessment is based on direct ground measurements of soil moisture at 5 cm (SM5) and 10 cm (SM10) depths, modeled using machine learning. Various meteorological variables (16 in total) were used as model inputs. The data were evaluated on a daily scale during 2017–2020; 75% of the days were randomly assigned to training and 25% to testing. The components related to air and soil temperature, relative air humidity, evaporation, and vapor pressure are the most important factors affecting daily soil moisture, and a mixture of these variables was used as model input. Two machine learning models were employed: a multilayer perceptron (MLP) neural network and an adaptive neuro-fuzzy inference system (ANFIS). Three agrometeorological stations located in three different climates were assessed: (1) Gharakhil Station (semi-humid and moderate), (2) Zarghan Station (semi-arid and cold), and (3) Zahak Station (extra-arid and moderate). According to the comparison between estimates and measurements, both models performed reasonably well in Gharakhil and Zarghan (57% < R2 < 66% for SM5 and 45% < R2 < 58% for SM10). However, the performances were weak and almost unacceptable in the extra-arid Zahak climate (14% < R2 < 17% for SM5 and 18% < R2 < 22% for SM10). According to the relative root mean square error (RRMSE) and Nash–Sutcliffe values of the stations, the models performed better in humid climates than in arid and extra-arid climates. The best RRMSE values were obtained by ANFIS at Gharakhil Station (0.193 for SM5 and 0.178 for SM10), while the weakest RRMSE values were obtained at Zahak Station: 0.887 (via MLP) for SM5 and 0.767 (via ANFIS) for SM10. Neither model was clearly superior overall; however, ANFIS slightly outperformed MLP in most cases.
Title: "Numerical Estimation of Surface Soil Moisture by Machine Learning Algorithms in Different Climatic Types" (pure and applied geophysics 181(7), 2149–2175)
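The two skill scores cited above can be computed as follows. These are generic definitions (RRMSE is taken here as RMSE normalised by the mean of the observations), and the soil-moisture values are made up for illustration:

```python
import numpy as np

def rrmse(obs, sim):
    """Relative RMSE: RMSE normalised by the mean of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return np.sqrt(np.mean((sim - obs) ** 2)) / obs.mean()

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, < 0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = np.array([0.20, 0.25, 0.30, 0.35])   # hypothetical soil-moisture values
sim = np.array([0.22, 0.24, 0.29, 0.36])   # hypothetical model estimates
print(rrmse(obs, sim), nash_sutcliffe(obs, sim))
```

Lower RRMSE and Nash–Sutcliffe values near 1 indicate the better station-level performance the study reports for the humid-climate sites.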
Pub Date: 2024-05-28 | DOI: 10.1007/s00024-024-03498-w
Arun Gandhi, István Geresdi, András Zénó Gyöngyösi, Ágoston Vilmos Tordai, Péter Torma, András Rehak, Mariann Bíró-Szilágyi, Gyula Horvath, Zita Ferenczi, Kornélia Imre, István Lázár, András Peterka, Tamás Weidinger
A micrometeorological fog experiment was carried out in Budapest, Hungary during the winter half year of 2020–2021. The field observation involved (i) standard meteorological and radiosonde measurements; (ii) surface radiation balance and energy budget components, and (iii) ceilometer measurements. 23 fog events occurred during the whole campaign. Foggy events were categorized based on two different methods suggested by Tardif and Rasmussen (2007) and Lin et al. (2022). Using the Present Weather Detector and Visibility sensor (PWD12), duration of foggy periods are approximately shorter (~ 9%) compared to ceilometer measurements. The categorization of fog based on two different methods suggests that duration of radiation fogs is lower compared to that of cloud base lowering (CBL) fogs. The results of analysis of observed data about the longest fog event suggest that (i) it was a radiation fog that developed from the surface upwards with condition of a near neutral temperature profile. Near the surface the turbulent kinetic energy and turbulent momentum fluxes remained smaller than 0.4 m2 s–2 and 0.06 kg m–1 s–2, respectively. In the surface layer the vertical profile of the sensible heat flux was near constant (it changes with height ~ 10%), and during the evolution of the fog, its maximum value was smaller than 25 W m–2, (ii) the dissipation of the fog occurred due to increase of turbulence, (iii) longwave energy budget was close to zero during fog, and a significant increase of virtual potential temperature with height was observed before fog onset. The complete dataset gives an opportunity to quantify local effects, such as tracking the effect of strengthening of wind for modification of stability, surface layer profiles and visibility. 
Fog formation, development and dissipation are quantified from micrometeorological observations performed in a suburban area of Budapest, providing a processing algorithm for investigating fog events for synoptic analysis and for the optimization of numerical model parameterizations.
{"title":"An Observational Case Study of a Radiation Fog Event","authors":"Arun Gandhi, István Geresdi, András Zénó Gyöngyösi, Ágoston Vilmos Tordai, Péter Torma, András Rehak, Mariann Bíró-Szilágyi, Gyula Horvath, Zita Ferenczi, Kornélia Imre, István Lázár, András Peterka, Tamás Weidinger","doi":"10.1007/s00024-024-03498-w","DOIUrl":"10.1007/s00024-024-03498-w","url":null,"abstract":"<div><p>A micrometeorological fog experiment was carried out in Budapest, Hungary, during the winter half-year of 2020–2021. The field observations involved (i) standard meteorological and radiosonde measurements; (ii) surface radiation balance and energy budget components; and (iii) ceilometer measurements. Twenty-three fog events occurred during the campaign. They were categorized using the two methods suggested by Tardif and Rasmussen (2007) and Lin et al. (2022). Fog durations derived from the Present Weather Detector and Visibility sensor (PWD12) are approximately 9% shorter than those derived from ceilometer measurements. The categorization suggests that radiation fogs are shorter-lived than cloud-base-lowering (CBL) fogs. Analysis of the longest fog event suggests that (i) it was a radiation fog that developed from the surface upwards under a near-neutral temperature profile; near the surface, the turbulent kinetic energy and turbulent momentum flux remained below 0.4 m<sup>2</sup> s<sup>–2</sup> and 0.06 kg m<sup>–1</sup> s<sup>–2</sup>, respectively, and in the surface layer the vertical profile of the sensible heat flux was nearly constant (varying by ~10% with height), with a maximum value below 25 W m<sup>–2</sup> during the evolution of the fog; (ii) the fog dissipated owing to an increase in turbulence; and (iii) the longwave energy budget was close to zero during the fog, and a significant increase of virtual potential temperature with height was observed before fog onset. The complete dataset makes it possible to quantify local effects, such as the influence of strengthening wind on stability, surface-layer profiles and visibility. Fog formation, development and dissipation are quantified from micrometeorological observations performed in a suburban area of Budapest, providing a processing algorithm for investigating fog events for synoptic analysis and for the optimization of numerical model parameterizations.</p></div>","PeriodicalId":21078,"journal":{"name":"pure and applied geophysics","volume":"181 6","pages":"2025 - 2049"},"PeriodicalIF":1.9,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00024-024-03498-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141168063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
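The PWD12-versus-ceilometer duration comparison above rests on a visibility-threshold definition of fog. As a minimal sketch only (not the full Tardif and Rasmussen (2007) or Lin et al. (2022) criteria, which add persistence and cloud-base conditions), fog events can be extracted from a 1-min visibility series as contiguous runs below the 1 km visibility threshold; the function name and the 30-min persistence default are illustrative assumptions:

```python
import numpy as np

def detect_fog_events(times_min, visibility_m, vis_threshold=1000.0,
                      min_duration_min=30):
    """Return (start, end) index pairs of fog events.

    A fog event is taken here as a contiguous run of samples with
    visibility below `vis_threshold` (1 km) lasting at least
    `min_duration_min` minutes; `times_min` is assumed to be a
    uniform 1-minute time axis.  Illustrative sketch only.
    """
    below = np.asarray(visibility_m) < vis_threshold
    events = []
    start = None
    for i, b in enumerate(below):
        if b and start is None:
            start = i                       # fog onset
        elif not b and start is not None:
            # fog cleared: keep the run only if it persisted long enough
            if times_min[i - 1] - times_min[start] + 1 >= min_duration_min:
                events.append((start, i - 1))
            start = None
    # handle a fog event still in progress at the end of the record
    if start is not None and times_min[-1] - times_min[start] + 1 >= min_duration_min:
        events.append((start, len(below) - 1))
    return events
```

Comparing two such event lists (one from PWD12 visibility, one from ceilometer-derived visibility) directly yields the kind of duration difference the abstract reports.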
Pub Date : 2024-05-27DOI: 10.1007/s00024-024-03488-y
M. A. Goodwin, A. Petts, B. D. Milbrath, A. Ringbom, D. L. Chester, T. W. Bowyer, J. L. Burnett, J. Friese, L. Lidey, J. C. Hayes, P. W. Eslinger, M. Mayer, D. Keller, R. Sarathi, C. Johnson, M. Aldener, S. Liljegren, T. Fritioff, J. Kastlander, S. J. Leadbetter
Radionuclides are monitored in the atmosphere for signatures of nuclear explosions as part of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Civil nuclear facilities, such as Nuclear Power Plants (NPPs) and Isotope Production Facilities (IPFs), are sources of anthropogenic radionuclides in the atmosphere, and these signatures are sometimes indistinguishable from those of a nuclear explosion. To improve the understanding of civil radionuclide-emitting facilities and their impact on the International Monitoring System (IMS) of the CTBT, a group of scientists from the UK, US and Sweden is collaborating with EDF Energy UK to measure radionuclide emissions from an Advanced Gas-cooled Reactor (AGR) nuclear power station. Emissions are measured at the source, via a stack monitor and high-resolution gamma spectrometry of filters, and at tens of kilometres away, via three sensitive radioxenon atmospheric samplers. The timing, isotopic composition, activity magnitudes and other release parameters of interest are investigated to improve discrimination between a civil radionuclide release and an explosive nuclear test. This paper outlines the work of the Xenon and Environmental Nuclide Analysis at Hartlepool (XENAH) collaboration, describes the equipment fielded, and provides initial results from each measurement campaign.
{"title":"Characterising the Radionuclide Fingerprint of an Advanced Gas-Cooled Nuclear Power Reactor","authors":"M. A. Goodwin, A. Petts, B. D. Milbrath, A. Ringbom, D. L. Chester, T. W. Bowyer, J. L. Burnett, J. Friese, L. Lidey, J. C. Hayes, P. W. Eslinger, M. Mayer, D. Keller, R. Sarathi, C. Johnson, M. Aldener, S. Liljegren, T. Fritioff, J. Kastlander, S. J. Leadbetter","doi":"10.1007/s00024-024-03488-y","DOIUrl":"https://doi.org/10.1007/s00024-024-03488-y","url":null,"abstract":"<p>Radionuclides are monitored in the atmosphere for signatures of nuclear explosions as part of the verification of the Comprehensive Nuclear-Test-Ban Treaty (CTBT). Civil nuclear facilities, such as Nuclear Power Plants (NPPs) and Isotope Production Facilities (IPFs), are sources of anthropogenic radionuclides in the atmosphere, and these signatures are sometimes indistinguishable from those of a nuclear explosion. To improve the understanding of civil radionuclide-emitting facilities and their impact on the International Monitoring System (IMS) of the CTBT, a group of scientists from the UK, US and Sweden is collaborating with EDF Energy UK to measure radionuclide emissions from an Advanced Gas-cooled Reactor (AGR) nuclear power station. Emissions are measured at the source, via a stack monitor and high-resolution gamma spectrometry of filters, and at tens of kilometres away, via three sensitive radioxenon atmospheric samplers. The timing, isotopic composition, activity magnitudes and other release parameters of interest are investigated to improve discrimination between a civil radionuclide release and an explosive nuclear test. This paper outlines the work of the Xenon and Environmental Nuclide Analysis at Hartlepool (XENAH) collaboration, describes the equipment fielded, and provides initial results from each measurement campaign.</p>","PeriodicalId":21078,"journal":{"name":"pure and applied geophysics","volume":"37 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141168034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
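One widely used screening tool for the civil-versus-explosion discrimination described above is the four-isotope radioxenon ratio plot, which places a sample in log space by its 135Xe/133Xe and 133mXe/131mXe activity ratios and compares it against a discrimination line. The sketch below shows only the mechanics of that screen: the function name is invented here, and the default slope and intercept are placeholders, not the calibrated line used by the XENAH collaboration or in the literature.

```python
import math

def xenon_ratio_screen(a_xe135, a_xe133, a_xe133m, a_xe131m,
                       slope=1.0, intercept=0.0):
    """Place a radioxenon sample on the four-isotope ratio plot.

    Computes the two activity ratios commonly used for source
    discrimination, 135Xe/133Xe (x-axis) and 133mXe/131mXe (y-axis),
    in log10 space, and reports which side of a straight
    discrimination line the sample falls on.  The default line
    (slope=1, intercept=0) is a placeholder, not a validated screen.
    """
    r1 = math.log10(a_xe135 / a_xe133)    # log10(135Xe / 133Xe)
    r2 = math.log10(a_xe133m / a_xe131m)  # log10(133mXe / 131mXe)
    above_line = r2 > slope * r1 + intercept
    return r1, r2, above_line
```

In practice the activities would first be decay-corrected to a common reference time, and the line coefficients would come from a calibrated discrimination study.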
Pub Date : 2024-05-27DOI: 10.1007/s00024-024-03461-9
J. Fernández-Fraile, Maurizio Mattesini, E. Buforn
This study applies a systematic methodology to the re-evaluation and analysis of earthquakes in the first half of the 20th century in Spain, a period with very inhomogeneous information sources. To the best of our knowledge, these earthquakes have never before been re-evaluated using as many information sources as those collected in this paper. The methodology has been tested in SE Spain for further application in the rest of the Iberian Peninsula. We have collected and thoroughly revised all available seismic information and data sources, ranging from specific reports, macroseismic questionnaires and seismograms to newspapers and pictures. For a set of 16 earthquakes between 1900 and 1962 in the selected area, we provide EMS-98 intensities and macroseismic epicenters, except for one epicenter that is instrumental. Among the 16 earthquakes, it has been possible to provide a depth value for only eight. Seismic intensities have been evaluated using the EMS-98 intensity scale, and epicenters have been located with both instrumental methods (Hypocenter location) and macroseismic methods (Bakun, Boxer 4.0 and MEEP 2.0). Our results show that Imax (maximum seismic intensity) values from the IGN catalogue are larger for more than half of the revised earthquakes, by between half a degree and two and a half degrees; only for the Lorquí earthquake of April 25, 1912 was Imax smaller, by half a degree. Most epicenters were also updated, with changes of between 1 and 41 km. Focal depths are less than 10 km, but this parameter has large uncertainties. The result of this study is a homogeneous seismic catalog (re-evaluated epicenters and Imax) for the period 1900–1962 that can be compared with periods prior to the 20th century.
{"title":"Re-Evaluation of the Earthquake Catalog for Spain Using the EMS-98 Scale for the Period 1900–1962","authors":"J. Fernández-Fraile, Maurizio Mattesini, E. Buforn","doi":"10.1007/s00024-024-03461-9","DOIUrl":"https://doi.org/10.1007/s00024-024-03461-9","url":null,"abstract":"<p>This study applies a systematic methodology to the re-evaluation and analysis of earthquakes in the first half of the 20th century in Spain, a period with very inhomogeneous information sources. To the best of our knowledge, these earthquakes have never before been re-evaluated using as many information sources as those collected in this paper. The methodology has been tested in SE Spain for further application in the rest of the Iberian Peninsula. We have collected and thoroughly revised all available seismic information and data sources, ranging from specific reports, macroseismic questionnaires and seismograms to newspapers and pictures. For a set of 16 earthquakes between 1900 and 1962 in the selected area, we provide EMS-98 intensities and macroseismic epicenters, except for one epicenter that is instrumental. Among the 16 earthquakes, it has been possible to provide a depth value for only eight. Seismic intensities have been evaluated using the EMS-98 intensity scale, and epicenters have been located with both instrumental methods (Hypocenter location) and macroseismic methods (Bakun, Boxer 4.0 and MEEP 2.0). Our results show that I<sub>max</sub> (maximum seismic intensity) values from the IGN catalogue are larger for more than half of the revised earthquakes, by between half a degree and two and a half degrees; only for the Lorquí earthquake of April 25, 1912 was I<sub>max</sub> smaller, by half a degree. Most epicenters were also updated, with changes of between 1 and 41 km. Focal depths are less than 10 km, but this parameter has large uncertainties. The result of this study is a homogeneous seismic catalog (re-evaluated epicenters and I<sub>max</sub>) for the period 1900–1962 that can be compared with periods prior to the 20th century.</p>","PeriodicalId":21078,"journal":{"name":"pure and applied geophysics","volume":"3 1","pages":""},"PeriodicalIF":2.0,"publicationDate":"2024-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141168037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
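Macroseismic locators of the kind cited above (Bakun, Boxer, MEEP) share a common core idea: grid-search trial epicentres and keep the one whose predicted intensities, from an intensity attenuation relation, best fit the observed site intensities. The sketch below shows that idea under an assumed attenuation law; the coefficients, grid parameters and function name are illustrative, not those calibrated for SE Spain or used in the paper.

```python
import numpy as np

def grid_search_epicenter(site_xy, intensities, a=6.0, b=3.0,
                          grid_half_width=50.0, step=1.0):
    """Locate a macroseismic epicentre by grid search (sketch).

    For each trial epicentre on a regular grid (km coordinates),
    predict site intensities with a simple attenuation law
        I_pred = a - b * log10(R + 1)
    and keep the trial point minimising the RMS intensity misfit.
    The coefficients a, b are illustrative placeholders.
    """
    site_xy = np.asarray(site_xy, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    centre = site_xy.mean(axis=0)           # grid centred on the sites
    xs = np.arange(centre[0] - grid_half_width,
                   centre[0] + grid_half_width + step, step)
    ys = np.arange(centre[1] - grid_half_width,
                   centre[1] + grid_half_width + step, step)
    best_xy, best_rms = None, np.inf
    for x in xs:
        for y in ys:
            r = np.hypot(site_xy[:, 0] - x, site_xy[:, 1] - y)
            pred = a - b * np.log10(r + 1.0)
            rms = np.sqrt(np.mean((pred - intensities) ** 2))
            if rms < best_rms:
                best_xy, best_rms = (x, y), rms
    return best_xy, best_rms
```

The production methods differ mainly in the calibrated attenuation relations, the misfit weighting, and how they quantify the uncertainty of the located epicentre.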