Abstract Seismic waves induced by incident acoustic waves from air disturbances can be used to image near-surface structures. In this article, we analyze seismic waveforms recorded by a dense array on the Xishancun landslide in Li County, Sichuan Province, southwest China, on Lunar New Year’s Eve (27 January 2017). A total of eight event clusters were identified as the result of firework explosions. For each cluster, which comprises dozens of individual events with high similarity, we manually pick arrival times of the first event recorded by the array and locate it with a grid-search method. We then rotate the three-component waveforms of all events from the east, north, and vertical coordinate system to local LQT coordinates (L, positive direction perpendicular to the landslide surface and pointing downward; Q, positive direction from the firework launch location to the station along the landslide surface; T, perpendicular to the plane formed by the L and Q directions, with its positive direction chosen so that LQT forms a left-handed coordinate system), and stack the LQT components of those events with cross-correlation values CC ≥ 0.8 with respect to the first event. Characteristics of the stacked LQT components are also examined. The particle motions at each station trace retrograde ellipses in the frequency range of ∼5–50 Hz, suggesting air-coupled Rayleigh waves generated by the firework explosions. Spectrograms of the Rayleigh waves also show clear dispersion, which might be used to image near-surface velocity structures. Although we cannot directly extract phase velocities due to the limitations of the seismic array, our study shows that fireworks might provide a low-cost and easy-to-use seismic source for imaging near-surface structures.
“Fireworks: A Potential Artificial Source for Imaging Near-Surface Structures,” by Risheng Chu, Qingdong Wang, Zhigang Peng, Minhan Sheng, Qiaoxia Liu, and Haopeng Chen. Seismological Research Letters, doi:10.1785/0220220281, published 20 October 2023.
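The cluster-stacking step described above can be sketched as follows. This is a minimal illustration, not the authors’ code: only the CC ≥ 0.8 selection rule comes from the abstract, while the synthetic wavelet, noise level, and trace length are invented for the demo.

```python
import numpy as np

def normalized_cc(a, b):
    """Zero-lag normalized cross-correlation between two equal-length traces."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def stack_similar_events(reference, events, threshold=0.8):
    """Stack only the events whose CC with the reference meets the threshold."""
    selected = [e for e in events if normalized_cc(reference, e) >= threshold]
    return np.mean(selected, axis=0), len(selected)

# Synthetic demo: noisy copies of a reference wavelet plus one unrelated trace.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
ref = np.sin(2 * np.pi * 10 * t) * np.exp(-5 * t)
events = [ref + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
events.append(rng.standard_normal(t.size))  # dissimilar noise trace, rejected
stacked, n_used = stack_similar_events(ref, events, threshold=0.8)
```

Stacking the selected events suppresses incoherent noise by roughly the square root of the number of traces, which is why the CC threshold matters: admitting dissimilar traces would degrade the stack rather than improve it.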
Alessandro Damiani, Valerio Poggi, Chiara Scaini, Mohsen Kohrangi, Paolo Bazzurro
Abstract Understanding the potential socioeconomic losses due to natural hazards, such as earthquakes, is of foremost importance in the field of catastrophe risk management. The construction of a probabilistic seismic risk model is complex and requires the tuning of several parameters essential to represent the seismic hazard of the region, the definition of the exposed inventory characteristics, and its vulnerability to ground motion. Because significant uncertainties can be associated with each model component, the loss estimates are often highly volatile. Nevertheless, to reduce the conceptual complexity and the computational burden, in many real-life applications these uncertainties are either not adequately treated or neglected altogether. The false high fidelity of the ensuing loss estimates can mislead decision-making strategies. Hence, it is useful to assess the influence that the variability in the estimated values of the model input parameters may exert on the final risk results and their relevant contributions. To this purpose, we performed a sensitivity analysis of the results of an urban seismic risk assessment for Isfahan (Iran). Systematic variations were applied to the values of the parameters that control earthquake occurrence in the probabilistic seismic hazard model. Curves of input–output relative variations were built for different risk metrics with the goal of identifying the input parameters whose uncertainty most affects the results. Our findings can support risk managers and practitioners in the process of building seismic hazard and risk models. We found that the Gutenberg–Richter a and b values, the maximum magnitude, and the threshold magnitude are large contributors to the variability of important risk measures, such as the 475 yr loss and the average annual loss, with the more frequent losses being, in general, the most sensitive.
“Impact of the Uncertainty in the Parameters of the Earthquake Occurrence Model on Loss Estimates of Urban Building Portfolios.” Seismological Research Letters, doi:10.1785/0220230248, published 20 October 2023.
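The one-at-a-time sensitivity described above can be illustrated with the Gutenberg–Richter relation itself, a toy example rather than the authors’ risk model: the annual rate of events at or above a threshold magnitude is N(M) = 10^(a − bM), so a perturbation of a or b translates directly into a relative change in the exceedance rate. The values of a0, b0, and the threshold below are hypothetical.

```python
def gr_rate(a, b, m):
    """Annual rate of earthquakes with magnitude >= m (Gutenberg-Richter)."""
    return 10.0 ** (a - b * m)

# Baseline occurrence model (hypothetical values for illustration).
a0, b0, m_thr = 4.0, 1.0, 5.0

# One-at-a-time +/-10% perturbation of the b-value.
base = gr_rate(a0, b0, m_thr)
low = gr_rate(a0, 0.9 * b0, m_thr)
high = gr_rate(a0, 1.1 * b0, m_thr)
rel_change_low = low / base - 1.0    # rate rises when b decreases
rel_change_high = high / base - 1.0  # rate falls when b increases
```

Note the asymmetry: a 10% decrease in b more than triples the rate at this threshold, while a 10% increase removes only about two-thirds of it, which is one reason the b-value emerges as a dominant contributor to loss variability.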
Shri Krishna Singh, Raúl Daniel Corona-Fernandez, Miguel Ángel Santoyo, Arturo Iglesias
Abstract Repeating large earthquakes (M ≥ 7), whose waveforms are nearly identical, have been identified only on the Mexican subduction thrust near Acapulco. These earthquakes occurred in 1962 (Ms 7.0) and 2021 (Ms 7.0, Mw 7.0). Here, we report on two more sequences of three repeating large earthquakes each, in eastern and western Oaxaca, Mexico. The repeating earthquakes in eastern Oaxaca occurred on 23 March 1928 (Ms 7.5), in 1965 (Ms 7.6, Mw 7.5), and in 2020 (Ms 7.4, Mw 7.4), and those in western Oaxaca on 4 August 1928 (Ms 7.4), in 1968 (Ms 7.2, Mw 7.3), and in 2018 (Ms 7.2, Mw 7.2). Galitzin seismograms of the earthquakes in each sequence at De Bilt, the Netherlands, or Strasbourg, France, are strikingly similar for at least 2600 s after the P-wave arrival. The similarity of waveforms within each sequence, together with tests using seismograms of events whose locations are accurately known, suggests that their source areas were within 10–20 km of each other. Moment-rate functions of these events are remarkably simple. We also document quasi-repeating earthquakes in central Oaxaca on 17 June 1928 (Ms 7.6) and 29 November 1978 (Ms 7.6, Mw 7.6). Such events have similar locations with large overlap in primary slip but are not identical. Recently, the Michoacán–Colima earthquakes of 1973 (Ms 7.5, Mw 7.6) and 2022 (Ms 7.6, Mw 7.6) were reported as quasi-repeaters. Repeating or quasi-repeating large earthquakes imply that, once the location and gross source parameters of one event in a sequence are known, they are known for all the others. This permits the estimation of recurrence periods and the delineation of seismic gaps with greater confidence. Repeating and quasi-repeating large earthquakes in Oaxaca, a unique observation, shed new light on the seismic hazard of the region, provide further support for the characteristic earthquake model, and reveal remarkably persistent rupture behavior through multiple earthquake cycles.
“Repeating Large Earthquakes along the Mexican Subduction Zone.” Seismological Research Letters, doi:10.1785/0220230243, published 20 October 2023.
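The recurrence-period estimation enabled by these sequences reduces, at first order, to simple interval arithmetic on the repeat dates reported above (using calendar years only, ignoring the exact day-of-year and any dating uncertainty):

```python
# Recurrence intervals from the repeat years reported for each Oaxaca sequence.
sequences = {
    "eastern Oaxaca": [1928, 1965, 2020],
    "western Oaxaca": [1928, 1968, 2018],
}
intervals = {name: [b - a for a, b in zip(years, years[1:])]
             for name, years in sequences.items()}
mean_interval = {name: sum(iv) / len(iv) for name, iv in intervals.items()}
```

Both sequences yield mean intervals near 45 yr, although the eastern Oaxaca intervals (37 and 55 yr) scatter more widely than the western ones (40 and 50 yr), a reminder that two intervals constrain a recurrence period only loosely.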
Yiyu Ni, Marine A. Denolle, Rob Fatland, Naomi Alterman, Bradley P. Lipovsky, Friedrich Knuth
Abstract Large-scale processing and dissemination of distributed acoustic sensing (DAS) data are among the greatest computational challenges and opportunities of seismological research today. Current data formats and computing infrastructure are not well adapted or user-friendly for large-scale processing. We propose an innovative, cloud-native solution for DAS seismology using the MinIO open-source object storage framework. We develop data schemas for the cloud-optimized data formats Zarr and TileDB, which we deploy on a local object storage service compatible with the Amazon Web Services (AWS) storage system. We benchmark reading and writing performance for the various data schemas using canonical use cases in seismology. We test our framework on a local server and on AWS. We find much-improved performance in compute time and memory throughput when using TileDB and Zarr compared to the conventional HDF5 data format. We demonstrate the platform with a compute-heavy use case in seismology: ambient noise seismology of DAS data. We process one month of data, pairing all 2089 channels within 24 hr using AWS Batch autoscaling.
“An Object Storage for Distributed Acoustic Sensing.” Seismological Research Letters, doi:10.1785/0220230172, published 20 October 2023.
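The idea behind a cloud-optimized layout can be sketched with a toy object store. This is not the paper’s Zarr/TileDB schema: a plain dict stands in for the MinIO/S3 bucket, and the key scheme and chunk size are invented for illustration. The point is that each chunk of the channel–time array becomes one object, so reading a channel subset touches only a few keys instead of one monolithic file.

```python
import numpy as np

CHUNK = 500  # samples per chunk (illustrative choice)

def put_array(store, name, data):
    """Write a (channels, samples) array as per-channel, per-chunk objects."""
    n_ch, n_s = data.shape
    store[f"{name}/shape"] = (n_ch, n_s)
    for ch in range(n_ch):
        for i in range(0, n_s, CHUNK):
            store[f"{name}/{ch}/{i // CHUNK}"] = data[ch, i:i + CHUNK].tobytes()

def read_channel(store, name, ch):
    """Read one channel back by touching only that channel's chunk objects."""
    _, n_s = store[f"{name}/shape"]
    parts = [np.frombuffer(store[f"{name}/{ch}/{i // CHUNK}"])
             for i in range(0, n_s, CHUNK)]
    return np.concatenate(parts)

store = {}  # stands in for an object-storage bucket
data = np.random.default_rng(1).standard_normal((8, 2000))
put_array(store, "das", data)
trace = read_channel(store, "das", 3)
```

In a real deployment the dict assignments become HTTP PUT/GET requests against the bucket, and independent chunk objects are what allow many workers (e.g. AWS Batch jobs) to read and write in parallel without lock contention.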
Jenna L. Faith, Marianne S. Karplus, Stephen A. Veitch, Diane I. Doser, Alexandros Savvaidis
Abstract With earthquakes increasing in the Delaware basin since 2009, earthquake studies, including accurate hypocenter determinations, are critically needed to identify the structures producing earthquakes and to determine whether they are related to unconventional petroleum development and production. In 2018, with funding from the Texas Seismological Network, we deployed and maintained a nodal network of 25 Magseis Fairfield Z-Land Generation 2 5-Hz seismic nodes in the Pecos, Texas, region of the Delaware basin, known as the Pecos Array. The network was deployed from November 2018 to the beginning of January 2020, with an additional two months of data recorded in September and October 2020. The network collected continuous three-component data at a 1000-Hz sampling rate. Node spacing varied from ∼2 km in town to ∼10 km farther from the city center. The primary goal of this network was to improve estimation of event hypocenters, which will help determine why earthquakes have increased over the past several years. In this article, we summarize the scientific motivation, deployment details, and data quality of this network. Data quality statistics show that we successfully collected continuous data with signal-to-noise ratios that allow us to detect and locate events, hundreds of which are estimated at ML < 0.50. This unique dataset is contributing to new seismotectonic studies in the Delaware basin.
“The Pecos Array: A Temporary Nodal Seismic Experiment in the Pecos, Texas, Region of the Delaware Basin.” Seismological Research Letters, doi:10.1785/0220230108, published 19 October 2023.
Annemarie Christophersen, Matthew C. Gerstenberger
Abstract The 2022 revision of the New Zealand National Seismic Hazard Model—Te Tauira Matapae Pūmate Rū i Aotearoa (NZ NSHM 2022) is, like other regional and national seismic hazard models, a collection of many component models that are combined via logic trees to calculate various parameters of seismic hazard. Developing, selecting, and combining component models for the NZ NSHM 2022 requires expert judgment. Informal and unstructured use of expert judgment can lead to biases. Drawing on a broad body of literature on potential biases in expert judgment and how to mitigate them, we used three approaches to incorporate expert judgment with the aim to minimize biases and understand uncertainty in seismic hazard results. The first approach applied two closely aligned group structures—the Science Team Working Groups and the Technical Advisory Group (TAG). The groups between them defined the project and made the scientific decisions necessary to produce the final model. Second, the TAG provided the function of a participatory review panel, in which the reviewers of the NSHM were actively engaged throughout the project. The third approach was performance-based weighting of expert assessments, which was applied to the weighting of the logic trees. It involved asking experts so-called calibration questions with known answers, which were relevant to the questions of interest, that is, the logic-tree weights. Each expert provided their best estimates with uncertainty, from which calibration and information scores were calculated. The scores were used to weight the experts’ assessments. The combined approach to incorporating expert judgment was intended to provide a robust and well-reviewed application of seismic hazard analysis for Aotearoa, New Zealand. Robust expert judgment processes are critical to any large science project, and our approach may provide learnings and insights for others.
“Expert Judgment in the 2022 Aotearoa New Zealand National Seismic Hazard Model.” Seismological Research Letters, doi:10.1785/0220230250, published 19 October 2023.
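The mechanics of performance-based weighting can be sketched as follows. This is a deliberately simplified stand-in, not the NZ NSHM procedure: the model used formal calibration and information scores computed from experts’ uncertainty distributions, whereas the hit-rate score below only illustrates the core idea of scoring experts on calibration questions with known answers. All numbers are hypothetical.

```python
def interval_hit_rate(intervals, truths):
    """Fraction of calibration questions whose true value fell inside
    the expert's stated interval."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(intervals, truths))
    return hits / len(truths)

def weight_experts(expert_intervals, truths):
    """Normalize hit-rate scores into aggregation weights (simplified)."""
    scores = [interval_hit_rate(iv, truths) for iv in expert_intervals]
    total = sum(scores)
    return [s / total for s in scores]

# Hypothetical calibration data: two experts, four questions with known answers.
truths = [3.0, 7.0, 1.5, 10.0]
expert_a = [(2.0, 4.0), (6.0, 8.0), (1.0, 2.0), (9.0, 11.0)]  # well calibrated
expert_b = [(0.0, 1.0), (6.5, 7.5), (5.0, 6.0), (0.0, 2.0)]   # mostly misses
weights = weight_experts([expert_a, expert_b], truths)
```

The resulting weights would then multiply each expert’s assessments of the actual questions of interest (here, the logic-tree branch weights), so that better-calibrated experts influence the aggregate more.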
Christian Grimm, Sebastian Hainzl, Martin Käser, Helmut Küchenhoff
Abstract The empirical Båth’s law states that the average magnitude difference (ΔM) between a mainshock and its strongest aftershock is ∼1.2, independent of the size of the mainshock. Although this observation can generally be explained by a scaling of aftershock productivity with mainshock magnitude in combination with a Gutenberg–Richter frequency–magnitude distribution, estimates of ΔM may be preferable because they are directly related to the most interesting information, namely the magnitudes of the main events, without relying on assumptions. However, a major challenge in calculating this value is the bias introduced by missing data points when the strongest aftershock is below the observed cut-off magnitude. Ignoring missing values leads to a systematic error because the data points removed are those with particularly large magnitude differences ΔM. The error can be minimized by restricting the statistics to mainshocks that are at least 2 magnitude units above the cutoff, but then the sample size is strongly reduced. This work provides an innovative approach for modeling ΔM by adapting methods for time-to-event data, which often suffer from incomplete observations (censoring). In doing so, we adequately account for unobserved values and estimate a fully parametric distribution of the magnitude differences ΔM for mainshocks in a global earthquake catalog. Our results suggest that magnitude differences are best modeled by the Gompertz distribution and that larger ΔM are expected at increasing depths and higher heat flows.
“A New Statistical Perspective on Båth’s Law.” Seismological Research Letters, doi:10.1785/0220230147, published 18 October 2023.
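The censoring bias described above is easy to reproduce by simulation. This toy model is not the paper’s approach: the exponential ΔM distribution, its mean, and the magnitude ranges are invented for illustration, whereas the paper fits a Gompertz distribution with a proper censored likelihood. The toy nonetheless shows the mechanism: dropping mainshocks whose strongest aftershock falls below the cutoff preferentially removes large ΔM, biasing the naive mean downward.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
m_cut = 3.0

# Synthetic mainshocks just above the cutoff, so that censoring matters.
mainshocks = rng.uniform(m_cut + 0.5, m_cut + 2.0, n)
# Toy assumption: Delta-M exponentially distributed with mean 1.2 (Bath-like).
delta_m = rng.exponential(1.2, n)
aftershocks = mainshocks - delta_m

observed = aftershocks >= m_cut          # strongest aftershock above cutoff
naive_mean = delta_m[observed].mean()    # biased: censored pairs dropped
true_mean = delta_m.mean()               # ~1.2 by construction
```

With these parameters the naive estimate falls well below the true mean, mirroring the systematic error the abstract describes; restricting to mainshocks at least 2 units above the cutoff would remove the bias, but at the cost of discarding most of the sample.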
Abstract Automated teleseismic earthquake monitoring is an essential part of global seismicity analysis. Although constraining epicenters in an automated fashion is an established technique, constraining event depths is substantially more difficult. One solution to this challenge is teleseismic depth phases, but these can currently not be identified precisely by automatic detection methods. Here, we propose two deep-learning models, DepthPhaseTEAM and DepthPhaseNet, to detect and pick depth phases. For training the models, we create a dataset based on the ISC-EHB bulletin—a high-quality catalog with detailed phase annotations. We show how backprojecting the predicted phase arrival probability curves onto the depth axis yields accurate estimates of earthquake depth. Furthermore, we show how a multistation model, DepthPhaseTEAM, leads to better and more consistent predictions than the single-station model, DepthPhaseNet. To allow direct application of our models, we integrate them within the SeisBench library.
“Learning the Deep and the Shallow: Deep-Learning-Based Depth Phase Picking and Earthquake Depth Estimation,” by Jannes Münchmeyer, Joachim Saul, and Frederik Tilmann. Seismological Research Letters, doi:10.1785/0220230187, published 17 October 2023.
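The backprojection step can be sketched with a toy travel-time model. This is illustrative only: the constant-slope mapping from pP−P delay to depth (and the `s_per_km` value) is invented, whereas a real implementation would use travel-time tables for the source-station geometry. Each time sample of the predicted phase-arrival probability curve maps to a candidate depth, and summing the probability mass on a depth grid yields a depth likelihood.

```python
import numpy as np

def backproject(times, probs, depth_grid, s_per_km=0.25):
    """Map a pP-P delay-time probability curve onto a depth axis.
    Toy linear mapping: delay = s_per_km * depth (constant-slope assumption)."""
    likelihood = np.zeros_like(depth_grid, dtype=float)
    predicted = s_per_km * depth_grid
    for t, p in zip(times, probs):
        likelihood[np.argmin(np.abs(predicted - t))] += p
    return likelihood

# Synthetic picker output: Gaussian probability bump at a 25 s pP-P delay.
times = np.linspace(0, 50, 501)
probs = np.exp(-0.5 * ((times - 25.0) / 1.0) ** 2)
depth_grid = np.arange(0, 201, 2.0)  # km
lik = backproject(times, probs, depth_grid)
best_depth = depth_grid[np.argmax(lik)]
```

Summing such likelihood curves over many stations is what makes the multistation approach more stable: picking errors at individual stations average out on the common depth axis.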
Abstract Ambient noise tomography has been widely used to estimate the shear-wave velocity structure of the Earth. A key step in this method is picking dispersion curves from dispersion spectrograms. Spectrograms generated with the frequency–Bessel (F-J) transform provide more dispersion information by including higher modes in addition to the fundamental mode. With the increasing availability of these spectrograms, manually picking dispersion curves has become highly time and energy consuming. Consequently, neural networks have been used to pick dispersions automatically, mainly by denoising the spectrograms. In several studies, a neural network was trained and its denoising performance verified using data from a single source. Training on single-source data, however, makes the trained network regionally biased. Even though domain adaptation can improve its performance with some success, some spectrograms still cannot be processed effectively. Multisource training is therefore useful and can reduce this regional bias during the training stage. Dispersion spectrograms from multiple sources normally differ in the features of their dispersion curves, especially for higher modes in F-J spectrograms. We therefore propose a training strategy based on domain confusion, through which the neural network effectively learns spectrograms from multiple sources. After domain confusion, the trained network can effectively process large volumes of test data, allowing more dispersion curves to be obtained automatically. This study provides deep insight into the denoising of dispersion spectrograms with neural networks and facilitates ambient noise tomography.
{"title":"Applying Feature Transformation-Based Domain Confusion to Neural Network for the Denoising of Dispersion Spectrograms","authors":"Weibin Song, Shichuan Yuan, Ming Cheng, Guanchao Wang, Yilong Li, Xiaofei Chen","doi":"10.1785/0220230103","DOIUrl":"https://doi.org/10.1785/0220230103","url":null,"abstract":"","PeriodicalId":21687,"journal":{"name":"Seismological Research Letters","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135992808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
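The abstract does not spell out the feature-transformation losses, but one standard way to realize domain confusion is to penalize the statistical gap between feature distributions drawn from different data sources, for example with a kernel maximum mean discrepancy (MMD). A minimal numpy sketch under that assumption (the function and variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF
    kernel: a standard measure of the gap between two feature
    distributions, usable as a domain-confusion penalty in training."""
    def k(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

rng = np.random.default_rng(0)
feat_a = rng.normal(0.0, 1.0, (64, 2))   # encoder features, source A
feat_b = rng.normal(1.0, 1.0, (64, 2))   # shifted features, source B
gap = rbf_mmd2(feat_a, feat_b)           # large: domains differ
aligned = rbf_mmd2(feat_a, rng.normal(0.0, 1.0, (64, 2)))  # near zero
```

Adding such a penalty to the denoising loss would push the network to map spectrograms from all sources into one shared feature distribution, which is the "confusion" the multisource training strategy relies on.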
{"title":"SSA News and Notes","authors":"","doi":"10.1785/0220230316","DOIUrl":"https://doi.org/10.1785/0220230316","url":null,"abstract":"","PeriodicalId":21687,"journal":{"name":"Seismological Research Letters","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135858597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}