From Local Earthquake Nowcasting to Natural Time Forecasting: A Simple Do-It-Yourself (DIY) Method
John B. Rundle, Ian Baughman, Andrea Donnellan, Lisa Grant Ludwig, Geoffrey C. Fox
Previous papers have outlined nowcasting methods that track the current state of earthquake hazard using only observed seismic catalogs. The basis for one of these methods, the "counting method," is the Gutenberg-Richter (GR) magnitude-frequency relation. The GR relation states that for every large earthquake of magnitude greater than M_T, there are on average N_GR small earthquakes of magnitude M_S. In this paper we use this basic relation, combined with the Receiver Operating Characteristic (ROC) formalism from machine learning, to compute the probability of a large earthquake, conditioned on the number of small earthquakes n(t) that have occurred since the last large earthquake. We work in natural time, defined as the count of small earthquakes between large earthquakes. A major advantage of the approach is that no probability model needs to be assumed; instead, the probability is computed as the Positive Predictive Value (PPV) associated with the ROC curve. We find that the PPV following the last large earthquake initially decreases as more small earthquakes occur, reflecting the observed temporal clustering of large earthquakes. As the number of small earthquakes continues to accumulate, the PPV subsequently begins to increase, and eventually a point is reached beyond which the rate of increase becomes much steeper. Here we describe and illustrate the method by applying it to a local region around Los Angeles, California, following the 17 January 1994 magnitude M6.7 Northridge earthquake.
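As a do-it-yourself starting point, here is a minimal sketch of the counting idea: it tallies natural-time counts between large events from a magnitude catalog and estimates an empirical conditional probability that stands in for the ROC-based PPV. The magnitude thresholds, horizon, and synthetic GR catalog are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of natural-time counting (not the authors' code). The thresholds,
# horizon, and synthetic catalog below are illustrative assumptions.
import numpy as np

def natural_time_intervals(mags, m_small, m_large):
    """Small-earthquake counts between successive large events, in catalog order."""
    intervals, n = [], 0
    for m in mags:
        if m >= m_large:
            intervals.append(n)  # a large event closes the current interval
            n = 0
        elif m >= m_small:
            n += 1               # one tick of natural time
    return np.array(intervals)

def conditional_probability(intervals, n_elapsed, horizon):
    """P(next large event within `horizon` counts | n_elapsed counts so far)."""
    at_risk = intervals[intervals >= n_elapsed]    # intervals that lasted this long
    if at_risk.size == 0:
        return np.nan
    return np.mean(at_risk < n_elapsed + horizon)  # ...and end within the horizon

# Synthetic Gutenberg-Richter catalog (b = 1) above magnitude 3.0, for illustration.
rng = np.random.default_rng(0)
mags = 3.0 - np.log10(1.0 - rng.random(200_000))
intervals = natural_time_intervals(mags, m_small=3.3, m_large=6.0)
for n in (0, 100, 500):
    print(n, conditional_probability(intervals, n, horizon=50))
```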
Earth and Space Science, 13(1). https://doi.org/10.1029/2025EA004820

Spectral Modeling of the Dark Signal for UV and VIS-NIR AvaSpec-2048 CCD-Array Spectrometers
A. Flores, A. Serrano, G. Sánchez-Hernández, M. A. Obregón, J. M. Vilaplana
The use of CCD-array spectrometers has increased substantially in recent years across many different fields. Although they have numerous advantages over conventional scanning spectrometers, they must be thoroughly characterized to correct for various sources of error. This study focuses on the experimental characterization of the dark signal of Avantes AvaSpec-2048 CCD-array spectrometers used to measure solar UV and VIS-NIR radiation. To obtain a large number of dark-signal measurements at different integration times and temperatures, a ramp methodology was followed and validated against stabilized-temperature experiments. These data allowed the analysis of the individual dependencies of the dark signal on integration time and temperature, as well as the proposal of a final multivariate model including both variables. This is one of the first multivariate models proposed for the dark signal of a detector used in a CCD-array spectrometer measuring solar UV radiation. The dependence of the dark signal on integration time is found to be linear, and its dependence on temperature nonlinear. The model performs remarkably well, with R² values above 0.99 and relative root-mean-squared errors of around 0.1 and 0.05 for the UV and VIS-NIR spectrometers, respectively. The improvement achieved by fitting an individual model for each pixel is discussed; this yields notably better results than the average model suggested by other authors. This study contributes to the characterization of the dark signal of CCD-array spectrometers, and the proposed methodology can be extended to other instruments.
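A rough illustration of such a per-pixel multivariate fit is sketched below. The abstract specifies only that the dark signal is linear in integration time and nonlinear in temperature; the exponential temperature term, parameter values, and synthetic ramp data are assumptions made for the example.

```python
# Hedged sketch of a per-pixel multivariate dark-signal fit. The exponential
# temperature term is an assumption; the paper does not state the functional form.
import numpy as np
from scipy.optimize import curve_fit

def dark_model(X, a, b, c, d):
    t_int, temp = X  # integration time [ms] and detector temperature [degC]
    return a + b * t_int + c * np.exp(d * temp)

def fit_pixel(t_int, temp, counts):
    """Fit the four-parameter model to one pixel's dark-signal measurements."""
    p0 = [counts.min(), 0.0, 1.0, 0.05]  # rough starting values
    popt, _ = curve_fit(dark_model, (t_int, temp), counts, p0=p0, maxfev=10_000)
    return popt

# Synthetic ramp data for a single pixel, mimicking varied times and temperatures.
rng = np.random.default_rng(1)
t_int = np.tile(np.linspace(10, 1000, 50), 4)
temp = np.repeat(np.linspace(15, 35, 4), 50)
counts = 120 + 0.08 * t_int + 2.0 * np.exp(0.09 * temp) + rng.normal(0, 1.5, t_int.size)
print(fit_pixel(t_int, temp, counts))  # should recover ~[120, 0.08, 2.0, 0.09]
```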
Earth and Space Science, 13(1). https://doi.org/10.1029/2024EA003815

On Optimal Parameterization for Mascon Solution of Surface Mass Changes From GRACE(-FO) Satellite Gravimetry
Dong Fang, Jiangjun Ran, Shin-Chan Han, Natthachet Tangdamrongsub, Zhengwen Yan
The Gravity Recovery and Climate Experiment (GRACE) and its successor, GRACE Follow-On (GRACE-FO), play an important role in monitoring mass transport across the Earth. Compared to spherical harmonic solutions, mass concentration (mascon) solutions offer less signal leakage and higher spatial resolution. How the shapes, sizes, and positions of mascons are parameterized influences the accuracy of the solutions. In this study, we derive a variable-sized mascon solution that enhances spatial resolution in polar regions by accounting for the orbital coverage of the satellites. To this end, we present a numerical simulation aimed at evaluating the performance of different parameterizations of the mascon solutions. We demonstrate that using variable-sized mascons reduces parameterization error by up to 17% and improves goodness of fit by up to 34%. The accuracy of signal recovery improves by about 23%, 34%, and 42% at basin scales in low-latitude, mid-latitude, and high-latitude zones, respectively. When applied to GRACE(-FO) data, the optimized parameterization scheme reduces noise by up to 1.84 cm in the surface mass change time series. Additionally, the optimally parameterized mascon solution helps to enhance signal recovery in mid-to-high-latitude regions. We discuss and quantify the benefits of variable-sized mascon parameterizations for surface mass change recovery and suggest an optimal scheme based on the simulation and real data processing. Overall, the optimized parameterization scheme will benefit finer-scale mass change signal recovery in mascon solutions.
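To illustrate the geometric idea, the sketch below builds a latitude-dependent mascon grid whose cells shrink toward the poles, where GRACE(-FO) ground tracks converge. The size function and band construction are assumptions for illustration, not the parameterization derived in the paper.

```python
# Illustrative variable-sized mascon grid (not the paper's parameterization):
# cell size shrinks toward the poles to mimic denser GRACE(-FO) ground tracks.
import numpy as np

def mascon_size_deg(lat, size_equator=3.0, size_pole=1.0):
    """Nominal mascon edge length in degrees as a function of latitude (assumed)."""
    return size_pole + (size_equator - size_pole) * np.cos(np.radians(lat))

def build_mascon_grid():
    """Tile the sphere with latitude bands of roughly square, variable-sized cells."""
    mascons, lat = [], -90.0
    while lat < 90.0:
        h = mascon_size_deg(lat)                              # band height [deg]
        lat_mid = min(lat + 0.5 * h, 90.0)
        # pick the cell count so cells stay roughly square on the sphere
        n_cells = max(1, int(round(360.0 * np.cos(np.radians(lat_mid)) / h)))
        width = 360.0 / n_cells
        for k in range(n_cells):
            mascons.append((lat, lat + h, -180.0 + k * width, -180.0 + (k + 1) * width))
        lat += h
    return mascons

grid = build_mascon_grid()
print(f"{len(grid)} mascons; first (polar) cell: {grid[0]}")
```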
Earth and Space Science, 13(1). https://doi.org/10.1029/2025EA004645

A Novel Iterative Stable Algorithm for Global Moho Modeling in the Spherical Harmonic Domain
Wenjin Chen, Xiaoyu Tang
The Mohorovičić discontinuity (Moho) marks the boundary between Earth's crust and the underlying mantle, serving as a critical interface for understanding Earth's structure, composition, and geodynamic processes. This study introduces a novel iterative and stable algorithm for global Moho depth inversion. We first derive the gravity disturbance of the Moho interface in the spherical harmonic domain, expressed as a series of spherical harmonic coefficients. These forward expressions are then reformulated into an iterative scheme for Moho depth estimation. To ensure convergence, a damping factor is applied to suppress high-frequency noise, and the process is constrained by observed gravity data to minimize residuals. The algorithm is validated using a synthetic Airy–Heiskanen interface in a closed-loop test. Results show stable convergence within approximately three iterations, yielding minimal gravity residuals (∼0.05 mGal) and small depth errors (standard deviation: 0.07 km), demonstrating the method's high accuracy. A sensitivity analysis of constant and variable Moho density contrasts further shows that when the density contrast varies from 450 to 600 kg/m³, the mean difference is less than 1.0 km and the standard deviation is only 1.1 km, indicating that the solution is largely insensitive to density changes. Importantly, incorporating a variable density contrast significantly improves Moho depth recovery along mid-ocean ridges. Finally, the method is applied to refined gravity disturbances that are maximally correlated with Moho depth, successfully recovering global Moho topography. Comparison with the CRUST1.0 seismic Moho model shows strong consistency in both spatial distribution and statistical measures, with depth residuals of 4.23 km (standard deviation) and gravity residuals of ∼1.89 mGal, further confirming the robustness of the method. Notably, the use of a variable Moho density contrast again provides substantial improvements along mid-ocean ridges.
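The flavor of such a damped iteration can be conveyed with a degree-wise toy model. The linearized single-layer kernel, damping factor, and closed-loop values below are simplified assumptions, not the authors' formulation.

```python
# Schematic damped fixed-point iteration for interface undulation from gravity,
# per spherical harmonic degree. A toy model, not the authors' algorithm.
import numpy as np

G = 6.674e-11      # gravitational constant [m^3 kg^-1 s^-2]
R = 6_371_000.0    # mean Earth radius [m]
D0 = 30_000.0      # reference Moho depth [m] (assumed)
drho = 500.0       # crust-mantle density contrast [kg m^-3] (assumed)

def kernel(n):
    """Linearized single-layer mapping from undulation to surface gravity, per degree."""
    r0 = R - D0
    return 4.0 * np.pi * G * drho * (n + 1) / (2.0 * n + 1.0) * (r0 / R) ** (n + 2)

def invert(dg_nm, degrees, alpha=0.7, n_iter=10):
    """Damped iteration: correct the undulation by the scaled gravity residual."""
    t_nm = np.zeros_like(dg_nm)
    for _ in range(n_iter):
        resid = dg_nm - kernel(degrees) * t_nm   # gravity misfit per coefficient
        t_nm += alpha * resid / kernel(degrees)  # damped update (alpha < 1)
    return t_nm

# Closed-loop check on synthetic coefficients for degrees 2-60.
degrees = np.arange(2, 61, dtype=float)
t_true = 1_000.0 * np.random.default_rng(2).standard_normal(degrees.size)
t_est = invert(kernel(degrees) * t_true, degrees)
print("max abs depth-coefficient error [m]:", np.abs(t_est - t_true).max())
```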
Earth and Space Science, 12(12). https://doi.org/10.1029/2025EA004607

J. J. Abraham-Alowonle, Amr Hamada, Moataz Abdelwahab, Kanya Kusano, Ayman Mahrous
High-speed solar wind streams (HSS), originating from coronal holes (CHs), are key drivers of space weather disturbances and heliospheric dynamics. However, forecasting HSS remains challenging due to the evolving morphology of CHs. In this study, we present a deep learning-based framework that models the spatiotemporal relationship between CHs and HSS. We applied preprocessing techniques that included the Stonyhurst projection and the removal of off-limb structures, transient events, and background noise, thus isolating persistent CH features. We developed two convolutional neural network (CNN) models: one using full-disk extreme ultraviolet images of the Sun at the 193 Å, 171 Å, and 304 Å wavelengths, the other using binary CH maps derived from the 193 Å wavelength. Both models are trained and evaluated across different solar cycle phases using a meta-learning strategy that retains optimal checkpoints based on validation loss. We find that, over the entire solar cycle (SC) period, our model outperforms the benchmark models, achieving a best correlation of
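A hedged sketch of what a CNN regressor in this setting might look like is given below (PyTorch): it maps a three-channel EUV image stack to a single solar wind speed value. The architecture, input size, and output are illustrative assumptions; the paper's two models, training scheme, and meta-learning checkpointing are not reproduced.

```python
# Illustrative CNN regressor (not the paper's architecture): maps a 3-channel
# EUV stack (193/171/304 A, assumed preprocessed) to a solar wind speed value.
import torch
import torch.nn as nn

class HSSRegressor(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling over the solar disk
        )
        self.head = nn.Linear(64, 1)   # predicted solar wind speed [km/s]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = HSSRegressor()
batch = torch.randn(4, 3, 256, 256)    # four preprocessed full-disk EUV stacks
print(model(batch).shape)              # -> torch.Size([4, 1])
```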