Pub Date: 2026-02-01 | Epub Date: 2025-12-12 | DOI: 10.1016/j.cageo.2025.106094
Jiashan Wan, Liangjun Wen, Ziheng Jian, Jinhua Wu, Jingyang Li, Mengqi Lian, Kai Wang
Slope deformation is characterized by pronounced time variability and complexity. Although ground-based synthetic aperture radar (GB-SAR) provides high-frequency, wide-coverage monitoring, the strong oscillations and large fluctuations in its displacement measurements can impair predictive performance. To address this, the raw displacement sequence is first smoothed via misaligned subtraction to suppress high-frequency noise and highlight key deformation trends. A dynamic confidence boundary is then established on the inverse-velocity curve to robustly identify the onset of acceleration. Building on prior work on physics-informed Kolmogorov–Arnold networks (PIKANs), we apply a PIKANs framework to landslide early warning, embedding the displacement-time evolution constraint into the basis-function space of a Kolmogorov–Arnold network (KAN) to unify nonlinear deformation dynamics with governing physical laws. During training, an alternating optimization scheme combining the Adam and L-BFGS algorithms accelerates convergence and enhances predictive accuracy. Comparative experiments on field GB-SAR datasets show that, compared with an improved KAN baseline and a physics-informed neural network benchmark, PIKANs reduce the relative error in landslide failure-time prediction by 38.42% and 20.44%, respectively. These results confirm that integrating physical-equation constraints into neural-network parameter updates substantially improves the precision and efficiency of real-time landslide early warning.
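The inverse-velocity idea referenced in this abstract (during tertiary creep, 1/velocity trends roughly linearly toward zero at failure) can be sketched as a simple linear extrapolation. This is an illustrative, generic sketch, not the paper's PIKANs model: the function name, the fit window, and the synthetic creep law below are all assumptions.

```python
import numpy as np

def inverse_velocity_failure_time(t, displacement, window=5):
    """Estimate failure time by extrapolating the inverse-velocity
    curve linearly to zero (a Fukuzono-style forecast).

    t            : sample times, monotonically increasing
    displacement : smoothed displacement at those times
    window       : number of trailing samples used for the linear fit
    """
    # Velocity from finite differences of the smoothed displacement.
    v = np.gradient(displacement, t, edge_order=2)
    inv_v = 1.0 / v
    # Fit a line to the most recent `window` points of 1/v vs t.
    slope, intercept = np.polyfit(t[-window:], inv_v[-window:], 1)
    # 1/v reaches zero at the predicted failure time.
    return -intercept / slope

# Synthetic tertiary creep with true failure time tf = 100:
# v = 1/(a*(tf - t)), so 1/v is exactly linear in t.
tf, a = 100.0, 0.05
t = np.linspace(0.0, 90.0, 200)
disp = -np.log(tf - t) / a        # displacement = integral of v
print(inverse_velocity_failure_time(t, disp))  # close to 100.0
```

On this synthetic series the extrapolated zero crossing recovers the true failure time to within a small fraction of a time unit; real GB-SAR series need the smoothing and confidence-boundary steps the abstract describes before such a fit is meaningful.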
Title: PIKANs: Physics-informed Kolmogorov–Arnold networks for landslide time-to-failure prediction
Computers & Geosciences, Volume 208, Article 106094
Pub Date: 2026-02-01 | Epub Date: 2025-09-19 | DOI: 10.1016/j.cageo.2025.106056
Harikrishnan Nalinakumar, Patrick Makuluni, Juerg Hauser, Stuart R. Clark
The study of sedimentary basins is crucial for understanding Earth's evolution and geological history. Traditional basin analysis, often constrained to 1D subsidence analysis, limits the spatial understanding of geological processes. This study introduces Stratya2D, a Python-based tool that generalizes 1D decompaction and backstripping to a 2D framework, allowing for detailed basin analysis. The tool extracts horizon annotations from pre-interpreted seismic images, enabling coordinate-based reconstruction of depositional surfaces. Using image-processing techniques, Stratya2D integrates horizon extraction, depth normalisation, and Monte Carlo Simulation (MCS) to quantify uncertainties in tectonic subsidence and layer evolution at each time step. This approach offers a more cost-effective alternative to traditional software and improves prediction reliability. The tool's effectiveness was validated through comparisons with established literature and case studies, including data from the NDI Carrara 1 well in the South Nicholson region, Northern Territory, Australia, along the 17GA-SN1 seismic line. The results closely align with previously published data and PetroMod simulations, accurately replicating the tectonic subsidence curve and offering extended insights into the complex geological context of the South Nicholson region. Comparative analysis with PetroMod confirms the robustness of Stratya2D, while the inclusion of MCS highlights the critical role of uncertainty quantification in subsurface modelling. Stratya2D thus offers a robust and versatile tool for regional-scale basin modelling, effectively addressing diverse geoscientific challenges.
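For readers unfamiliar with the 1D building blocks that Stratya2D extends, a minimal sketch of decompaction (conserving grain volume under an exponential porosity-depth law) and Airy-isostatic backstripping is given below. All function names, parameter values, and the fixed-point solver are illustrative assumptions, not Stratya2D's API.

```python
import numpy as np

RHO_M, RHO_W = 3300.0, 1000.0   # mantle and water densities, kg/m^3

def decompacted_thickness(z_top, z_base, phi0, c, new_top, tol=1e-6):
    """Thickness of a layer after moving its top to depth `new_top`,
    conserving solid (grain) volume for porosity phi(z) = phi0*exp(-c*z)."""
    def grain(z1, z2):
        # Integral of (1 - phi) between depths z1 and z2.
        return (z2 - z1) - phi0 / c * (np.exp(-c * z1) - np.exp(-c * z2))
    target = grain(z_top, z_base)
    # Fixed-point iteration for the new base depth.
    new_base = new_top + (z_base - z_top)
    for _ in range(100):
        nb = new_top + target + phi0 / c * (np.exp(-c * new_top) - np.exp(-c * new_base))
        if abs(nb - new_base) < tol:
            break
        new_base = nb
    return new_base - new_top

def airy_tectonic_subsidence(sediment_thickness, rho_sed, water_depth=0.0):
    """Airy-isostatic backstripping: water-loaded tectonic subsidence."""
    return sediment_thickness * (RHO_M - rho_sed) / (RHO_M - RHO_W) + water_depth

# A 1 km layer buried at 2-3 km depth thickens when restored to the surface:
thick = decompacted_thickness(2000.0, 3000.0, phi0=0.5, c=0.5e-3, new_top=0.0)
print(thick > 1000.0)  # True: pore space re-expands on decompaction
```

Stratya2D's contribution is applying such column calculations along every horizontal position of a 2D horizon set, with MCS perturbing the compaction and density parameters to propagate uncertainty.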
Title: Stratya2D: Enhancing kinematic backstripping through image-based 2D horizon integration
Computers & Geosciences, Volume 207, Article 106056
Pub Date: 2026-02-01 | Epub Date: 2025-10-14 | DOI: 10.1016/j.cageo.2025.106065
Meng Guo, Bingshou He, Qianqian Ci
Imaging of P- and S-waves in reverse time migration (RTM) of elastic waves is often achieved by cross-correlating P-waves or S-waves with different propagation directions. This requires the Poynting vector or optical-flow vector at each imaging point at different times during wavefield extrapolation, used to indicate the direction of wave propagation. However, with the existing velocity-stress elastic wave equations, only the Poynting vector of the mixed P- and S-wavefield can be obtained during extrapolation, not that of the pure P-wave or pure S-wave. The propagation direction obtained therefore corresponds to a mixed wavefield rather than a pure P-wave or S-wave, which does not meet the requirements of elastic-wave RTM and introduces errors. The existing first-order velocity-dilation-rotation elastic wave equation overcomes these issues but, because it assumes a homogeneous medium, cannot accurately describe wave propagation at impedance interfaces. In particular, where the P-wave and S-wave velocity interfaces do not coincide, wavefield extrapolation with this equation produces errors in the reflected, transmitted, and converted wavefields. In addition, severe energy leakage occurs at S-wave velocity interfaces, leading to inaccurate S-wave imaging. In this paper, we propose a new elastic wave equation that decouples P- and S-waves under the assumption of an inhomogeneous medium; it yields the propagation directions of the pure P-wave and pure S-wave and overcomes the problems above. Cross-correlation imaging using Poynting vectors computed from the new equation shows, in model calculations, that the imaging results are free of the noise otherwise generated in RTM, demonstrating the accuracy and applicability of the equation.
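The Poynting vector discussed above can be formed pointwise from particle velocities and stresses. A minimal 2-D sketch of the standard definition P_j = -sigma_ij * v_i follows; the array names and the plane-wave test field are assumptions for illustration, not the authors' new decoupled equation.

```python
import numpy as np

def poynting_vector_2d(vx, vz, sxx, szz, sxz):
    """Energy-flux (Poynting) vector of a 2-D elastic wavefield from
    particle velocities (vx, vz) and stresses (sxx, szz, sxz):
        P_x = -(sxx*vx + sxz*vz),  P_z = -(sxz*vx + szz*vz).
    Its direction approximates the local propagation direction of the
    (mixed) wavefield at each grid point."""
    px = -(sxx * vx + sxz * vz)
    pz = -(sxz * vx + szz * vz)
    return px, pz

# A plane P-wave travelling in +x: vx nonzero, compressive sxx opposing it.
vx = np.ones((4, 4)); vz = np.zeros((4, 4))
sxx = -np.ones((4, 4)); szz = np.zeros((4, 4)); sxz = np.zeros((4, 4))
px, pz = poynting_vector_2d(vx, vz, sxx, szz, sxz)
print(px[0, 0], pz[0, 0])  # 1.0 0.0 -> energy flux points in +x
```

Applied to velocity-stress fields this yields the mixed-wavefield direction the abstract criticizes; the paper's contribution is an equation whose extrapolated fields allow the same construction for pure P and pure S components separately.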
Title: A new elastic wave equation for decoupling P-wave and S-waves and its application
Computers & Geosciences, Volume 207, Article 106065
Pub Date: 2026-02-01 | Epub Date: 2025-09-20 | DOI: 10.1016/j.cageo.2025.106058
Emmanuel Biabiany, Ruben Bagghi, Didier C. Bernard, Vincent Pagé, Stéphane Cholet, Raphaël Cécé
This study investigates precipitation patterns in the Caribbean region using a novel Multi-Expert Distance (MED) metric for clustering analysis. MED integrates multiple climate parameters, including Sea Surface Temperature (SST), wind components at 925 hPa, and Outgoing Longwave Radiation (OLR), with the objective of enhancing spatiotemporal precipitation analysis. This approach offers an alternative to conventional methods that rely on single datasets and Euclidean distances, combining physical parameters during clustering to improve accuracy and insight. The analysis covers a 43-year period (1979–2021) over a spatial domain extending from the Gulf of Mexico across the entire Caribbean. The MED metric incorporates zone-specific histograms and the Kullback-Leibler divergence, enabling dynamic comparisons of atmospheric configurations. The analysis yielded six distinct clusters, each with unique seasonal and inter-annual precipitation patterns shaped by regional atmospheric dynamics, and revealed significant transitions and associations between clusters, precipitation levels, and atmospheric conditions. Clusters representing dry conditions showed negative SST anomalies, reflecting reduced moisture production; conversely, high-precipitation clusters showed positive SST anomalies, conducive to moisture accumulation. Furthermore, tropical storms and hurricanes were predominantly observed in the wetter clusters, underscoring the utility of MED in linking atmospheric phenomena with climatic impacts. The results highlight the effectiveness of MED in improving both the accuracy and interpretability of clustering algorithms. Beyond its methodological contributions, this work highlights MED's potential to advance the understanding and forecasting of precipitation regimes, contributing to more robust climate analyses. Such insights are particularly relevant for informing climate adaptation strategies in vulnerable regions, notably the Caribbean. Future research could investigate automated domain segmentation to further refine and optimize this approach.
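The histogram-plus-Kullback-Leibler construction described above can be sketched as a weighted sum of per-parameter divergences. This is an illustrative stand-in for the MED metric, with assumed function names, symmetrisation, and weights, not the authors' exact formulation.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrised Kullback-Leibler divergence between two histograms."""
    p = p / p.sum(); q = q / q.sum()          # normalise to distributions
    p = np.clip(p, eps, None); q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def multi_param_distance(hists_a, hists_b, weights):
    """Weighted sum of per-parameter (e.g. SST, wind, OLR) histogram
    divergences between two atmospheric states."""
    return sum(w * sym_kl(a, b) for w, a, b in zip(weights, hists_a, hists_b))

# Identical histograms give zero distance; differing ones give a positive one.
h1 = np.array([0.2, 0.5, 0.3]); h2 = np.array([0.5, 0.2, 0.3])
print(multi_param_distance([h1], [h1], [1.0]))       # 0.0
print(multi_param_distance([h1], [h2], [1.0]) > 0)   # True
```

Such a distance can replace the Euclidean metric inside standard clustering algorithms (e.g. k-medoids), which is the role MED plays in the study.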
Title: A new multi-expert distance for clustering climate parameters: a Caribbean precipitation case study
Computers & Geosciences, Volume 207, Article 106058
Pub Date: 2026-02-01 | Epub Date: 2025-11-04 | DOI: 10.1016/j.cageo.2025.106078
Zhewen Xu, Baoxiang Pan, Xiaohui Wei, Hongliang Li, Dongyuan Tian, Zijian Li, Changzheng Liu
The rapid development of large climate models has created a need to store and transfer massive volumes of atmospheric data worldwide, which calls for efficient compression. However, traditional compression algorithms are of limited efficiency on atmospheric data. As an emerging technique, Implicit Neural Representation (INR) has recently gained significant momentum and shows great potential for compressing diverse atmospheric data, yet it struggles with the data's complex spatio-temporal characteristics and variability. We therefore propose Hierarchical Harmonic decomposition implicit neural compression (HiHa) for atmospheric data. HiHa first segments the data into multi-frequency signals through harmonic decomposition, and then tackles each harmonic with a frequency-based hierarchical compression module consisting of sparse-storage, multi-scale-INR, and iterative-decomposition sub-modules. We additionally design a temporal residual compression module that exploits temporal continuity to accelerate compression. Experiments show that HiHa achieves (1) 27× compression in 308 s with error within 1e-5, and (2) 244× compression in 43 s with error within 1e-3. These results outperform both mainstream compressors and other INR-based methods, and demonstrate that data-driven models using HiHa-compressed data achieve the same accuracy as with raw data.
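The harmonic-decomposition step can be illustrated with a simple FFT band split: the signal is partitioned into frequency bands that can be stored or modelled separately and sum back to the original. This is a generic sketch under assumed band edges, not HiHa's actual decomposition or its INR sub-modules.

```python
import numpy as np

def harmonic_bands(signal, n_bands):
    """Split a 1-D signal into n_bands disjoint frequency bands via the
    FFT, so low-frequency structure and high-frequency detail can be
    compressed separately. The bands sum back to the original signal
    up to floating-point error."""
    spec = np.fft.rfft(signal)
    edges = np.linspace(0, len(spec), n_bands + 1).astype(int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = np.zeros_like(spec)
        mask[lo:hi] = spec[lo:hi]             # keep only this band
        bands.append(np.fft.irfft(mask, n=len(signal)))
    return bands

# A slow oscillation plus a fast one separate cleanly into bands.
t = np.linspace(0, 1, 256, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)
bands = harmonic_bands(x, 4)
print(np.allclose(sum(bands), x))  # True: bands reconstruct the signal
```

In a compression setting, each band would then be fitted by its own (multi-scale) implicit neural representation, with sparser storage for the near-empty bands.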
Title: HiHa: Introducing hierarchical harmonic decomposition to implicit neural compression for atmospheric data
Computers & Geosciences, Volume 207, Article 106078
Pub Date: 2026-02-01 | Epub Date: 2025-11-03 | DOI: 10.1016/j.cageo.2025.106073
Tingting Lin, Qingyue Wang, Chuandong Jiang, Chunpeng Ren, Yunzhi Wang, Liang Wang
Surface nuclear magnetic resonance (SNMR) is a geophysical extension of nuclear magnetic resonance (NMR) that enables non-invasive mapping of subsurface hydrogeological properties by measuring the relaxation response of groundwater hydrogen nuclei. Accurately modeling the transient spin dynamics in SNMR requires solving the full-Bloch equations under Earth's geomagnetic field, where magnetic field inhomogeneities, multicomponent relaxation, and nonlinear pulsed excitations introduce significant mathematical and computational challenges. We present a spectral-diagonalization-based matrix exponential integration (SD-MEI) algorithm for efficient and stable solutions of the full-Bloch equations in SNMR. Conventional explicit numerical methods exhibit cumulative discretization errors and escalating computational costs due to step-size dependence and finite-precision limitations. SD-MEI integrates spectral diagonalization with matrix exponential operations, replacing iterative computations with a single eigendecomposition of the system matrix. This approach achieves parameter-robust computational complexity while maintaining numerical stability across broad B1 field strengths (10⁻¹⁰ T to 10⁻⁵ T) and relaxation times (10 ms to 1000 ms). Validated for steady-state free precession (SSFP) dynamics in heterogeneous geomagnetic environments, the method enables high-accuracy modeling of transient magnetization evolution with large time steps. The framework advances efficient SNMR forward modeling and inversion and supports protocol optimization by resolving critical limitations of existing numerical and analytical approaches.
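The core idea, replacing step-by-step time integration with one eigendecomposition and a closed-form propagator exp(A*t), can be sketched on a toy precession-only Bloch system. The matrix entries, frequency, and function name here are illustrative assumptions, not the authors' SD-MEI implementation.

```python
import numpy as np

def expm_eig(A, t):
    """exp(A*t) via a single eigendecomposition A = V diag(w) V^-1,
    so the propagator for any step t is V diag(exp(w*t)) V^-1.
    No step-size-dependent iteration, hence stable for large t
    (assumes A is diagonalizable)."""
    w, V = np.linalg.eig(A)
    return (V * np.exp(w * t)) @ np.linalg.inv(V)

# Bloch-like precession about z at angular frequency w0, no relaxation:
#   dMx/dt = w0*My,  dMy/dt = -w0*Mx,  dMz/dt = 0.
w0 = 2 * np.pi * 2000.0                  # rad/s
A = np.array([[0.0,  w0, 0.0],
              [-w0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
M0 = np.array([1.0, 0.0, 0.0])
M = expm_eig(A, 1e-4).real @ M0          # one large 0.1 ms step
print(np.allclose(np.linalg.norm(M), 1.0))  # True: |M| preserved exactly
```

An explicit scheme (e.g. forward Euler) would need thousands of sub-steps at this frequency to stay stable; the eigendecomposition propagator takes the 0.1 ms step in one multiplication, which is the efficiency SD-MEI exploits for the full Bloch system with relaxation.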
Title: Spectral-diagonalization-based matrix exponential integration for efficient and stable solutions of full-Bloch equations in surface NMR
Computers & Geosciences, Volume 207, Article 106073
Pub Date: 2026-02-01 | Epub Date: 2025-12-12 | DOI: 10.1016/j.cageo.2025.106093
Yunlei Sun, Danning Qi, Tiancheng Chen, Ke Xu, Pengxiao Shi, Yongfei Yang
Digital rock technology is critical for precise reservoir characterization and the optimization of oil and gas extraction. However, the high cost of rock-sample acquisition and labor-intensive manual labeling lead to data scarcity, significantly hindering deep-learning applications in geosciences and petroleum engineering. Existing rock-image generation methods often suffer from limited fidelity and a lack of semantic control, making them inadequate for high-precision analysis. To address these challenges, we propose the Rock Image Semantic Diffusion Generative Model (RockSDM), a diffusion-based generative framework that, for the first time, introduces semantic control into 2D rock-image generation. RockSDM mitigates data scarcity by jointly generating high-quality rock images and pixel-level masks, ensuring both microstructural consistency and geological realism. Experimental results demonstrate that RockSDM significantly outperforms existing models in Fréchet Inception Distance (FID) and Kernel Inception Distance (KID). Moreover, the synthetic data generated by RockSDM substantially enhances segmentation performance in data-constrained scenarios: on the TriBSE dataset, RockSDM improves mIoU by 17.9%, with the IoU of low-frequency categories increasing by up to 71.9%, effectively mitigating data imbalance and improving model generalization. By reducing the cost of rock-sample acquisition and manual annotation, RockSDM offers a powerful data-augmentation tool, potentially accelerating deep-learning adoption in geosciences and petroleum engineering.
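The segmentation metrics quoted above (per-class IoU and their mean, mIoU) are standard; a minimal sketch of how they are computed from integer label maps follows. The tiny label maps and class count are illustrative, not data from the paper.

```python
import numpy as np

def per_class_iou(pred, target, n_classes):
    """Intersection-over-Union per class for integer label masks.
    mIoU is the mean of these values over the classes of interest."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        ious.append(float(inter / union) if union else float("nan"))
    return ious

# Toy 3x3 prediction vs ground truth with 3 classes:
pred   = np.array([[0, 0, 1], [1, 1, 2], [2, 2, 2]])
target = np.array([[0, 0, 1], [0, 1, 2], [2, 2, 2]])
ious = per_class_iou(pred, target, 3)
print([round(i, 2) for i in ious])  # [0.67, 0.67, 1.0]
```

Low-frequency (rare) classes have small unions, so a few corrected pixels move their IoU sharply; that is why augmenting rare-class examples with synthetic data can raise their IoU far more than the overall mIoU.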
Title: RockSDM: High-fidelity 2D rock image generation via semantic diffusion for digital rock applications
Computers & Geosciences, Volume 208, Article 106093
Pub Date: 2026-02-01 | Epub Date: 2025-10-30 | DOI: 10.1016/j.cageo.2025.106070
Emmy Scott, Melody Whitehead, Jonathan Procter
This review examines the current landscape of computational volcanic hazard models, focusing on their creation and application, for a diverse set of end-users’ short-term and long-term forecasting requirements. We provide a comprehensive classification of volcanic hazard models, categorising them according to their theoretical foundations. This is central to understanding the diversity of hazard characterisation and simulation approaches, from empirical models to computationally demanding physics-based numerical models. The classification framework helps contextualise the strengths and limitations of different models and their suitability for specific forecasting demands. We discuss the fundamental principles behind model construction, considering factors such as input parameters, conceptual frameworks, and the incorporation of uncertainties. We also synthesise existing literature on model testing, covering aspects such as model verification, validation, calibration, and benchmarking, and provide a systematic and transparent framework for model selection, considering data availability, computational constraints, and specific forecasting needs. We explore the balance between model complexity, computational efficiency, and accuracy, addressing the uncertainties inherent in both input parameters and model processes. A key focus is the role of input parameters in forecasting and the need to select models that are detailed enough to capture essential hazard dynamics, yet simple enough to minimise error and computational costs.
{"title":"Exploring the role of model classification, complexity, and selection in volcanic hazard forecasting","authors":"Emmy Scott, Melody Whitehead, Jonathan Procter","doi":"10.1016/j.cageo.2025.106070","DOIUrl":"10.1016/j.cageo.2025.106070","url":null,"abstract":"<div><div>This review examines the current landscape of computational volcanic hazard models, focusing on their creation and application, for a diverse set of end-users’ short-term and long-term forecasting requirements. We provide a comprehensive classification of volcanic hazard models, categorising them according to their theoretical foundations. This is central to understanding the diversity of hazard characterisation and simulation approaches, from empirical models to computationally demanding physics-based numerical models. The classification framework helps contextualise the strengths and limitations of different models and their suitability for specific forecasting demands. We discuss the fundamental principles behind model construction, considering factors such as input parameters, conceptual frameworks, and the incorporation of uncertainties. We also synthesise existing literature on model testing, covering aspects such as model verification, validation, calibration, and benchmarking, and provide a systematic and transparent framework for model selection, considering data availability, computational constraints, and specific forecasting needs. We explore the balance between model complexity, computational efficiency, and accuracy, addressing the uncertainties inherent in both input parameters and model processes. 
A key focus is the role of input parameters in forecasting and the need to select models that are detailed enough to capture essential hazard dynamics, yet simple enough to minimise error and computational costs.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"207 ","pages":"Article 106070"},"PeriodicalIF":4.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145417461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-02-01Epub Date: 2025-09-10DOI: 10.1016/j.cageo.2025.106053
Youcheng Song , Haijun Wang , Xiaoxu Cao , Bin Zhang , Jialin Xie , Zhijia Gong , Yaotao Liang , Zongyou He , Guanxian Huang
The integration of the first law of geography into land use change simulation models has attracted considerable attention, aiming to improve model accuracy through the enhanced representation of spatial heterogeneity. However, existing evaluation metrics, which primarily focus on cell-to-cell agreement, inadequately capture the models' ability to represent spatial heterogeneity. Consequently, there is a pressing need for updated evaluation metrics that accurately reflect the models' capability to depict spatial features. To address this issue, the Fuzzy Figure of Merit (Fuzzy FoM), grounded in fuzzy theory, is proposed. This metric effectively quantifies and visualizes a model's ability to capture spatial features by introducing the notion of degree of membership, facilitating a comprehensive analysis of model accuracy from both statistical and spatial perspectives. This paper demonstrates the metric's utility in the validation process, illustrating it with four land use change models that incorporate spatial heterogeneity.
{"title":"A novel metric to assess the accuracy of land use change modeling","authors":"Youcheng Song , Haijun Wang , Xiaoxu Cao , Bin Zhang , Jialin Xie , Zhijia Gong , Yaotao Liang , Zongyou He , Guanxian Huang","doi":"10.1016/j.cageo.2025.106053","DOIUrl":"10.1016/j.cageo.2025.106053","url":null,"abstract":"<div><div>The integration of the first law of geography into land use change simulation models has attracted considerable attention, aiming to improve model accuracy through the enhanced representation of spatial heterogeneity. However, existing evaluation metrics, which primarily focus on cell-to-cell agreement, inadequately capture the models' ability to represent spatial heterogeneity. Consequently, there is a pressing need for updated evaluation metrics that accurately reflect the models' capability to depict spatial features. To address this issue, the Fuzzy Figure of Merit (Fuzzy FoM), grounded in fuzzy theory, is proposed. This metric effectively quantifies and visualizes a model's ability to capture spatial features by introducing the notion of degree of membership, facilitating a comprehensive analysis of model accuracy from both statistical and spatial perspectives. This paper demonstrates the metric's utility in the validation process, illustrating it with four land use change models that incorporate spatial heterogeneity.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"207 ","pages":"Article 106053"},"PeriodicalIF":4.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145098951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
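The Fuzzy FoM builds on the conventional Figure of Merit for change models, FoM = hits / (hits + misses + false alarms). As a minimal sketch of the crisp metric with an optional per-cell membership weight in [0, 1] standing in for the degree-of-membership idea (the paper's exact membership function is not given in the abstract, so the weighting shown here is an assumption for illustration):

```python
import numpy as np

def figure_of_merit(observed_change, simulated_change, membership=None):
    """Figure of Merit: hits / (hits + misses + false alarms).

    `membership` (values in [0, 1]) optionally down-weights each cell's
    contribution to the tallies; with membership=None the function
    reduces to the standard crisp FoM.
    """
    obs = np.asarray(observed_change, dtype=bool)
    sim = np.asarray(simulated_change, dtype=bool)
    w = np.ones(obs.shape) if membership is None else np.asarray(membership, dtype=float)
    hits = (w * (obs & sim)).sum()          # change observed and simulated
    misses = (w * (obs & ~sim)).sum()       # change observed, not simulated
    false_alarms = (w * (~obs & sim)).sum() # change simulated, not observed
    return hits / (hits + misses + false_alarms)

obs = np.array([1, 1, 0, 0, 1, 0], dtype=bool)  # observed change map (toy)
sim = np.array([1, 0, 1, 0, 1, 0], dtype=bool)  # simulated change map (toy)
fom = figure_of_merit(obs, sim)
```

In the crisp metric a near-miss one cell away counts fully against the model; a membership-based weighting is one way such spatially-close disagreements can be scored more leniently, which is the motivation the abstract attributes to the fuzzy variant.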
Pub Date : 2026-02-01Epub Date: 2025-09-27DOI: 10.1016/j.cageo.2025.106059
Keran Li , Yujie Gao , Yingjie Ma , Chengkun Li , Junjie Ye , Hao Yu , Yiming Xu , Dongyu Zheng , Ardiansyah Koeshidayatullah
Microscopic analysis is the cornerstone for uncovering the petrological and mineralogical characteristics of carbonate rocks, and such information is critical for the precise identification of carbonate microfacies and diagenetic evolution. Obtaining this information, however, relies heavily on manual expertise and is time-consuming and laborious. Recently, several deep learning models have shown great potential for automating this identification process. However, current deep learning models typically have complex architectures that greatly hinder deployment and inference in practical, lightweight environments. To overcome the difficulty of running deep learning inference in real edge scenarios, a three-stage segmentation method based on weakly supervised learning is proposed. The approach embeds class activation mapping (CAM), grey level co-occurrence matrix (GLCM), and knowledge distillation (KD) modules to achieve attention transfer to a lightweight network (CamNet). Furthermore, based on the model's performance and application requirements, a lightweight carbonate thin-section image-assisted recognition system has been developed. Through careful control-flow design, this system achieves an effective balance between runtime latency and resource consumption while maintaining strong performance. Experimental results indicate that CamNet's total parameter count is only 800k, and when deployed in embedded systems it achieves an inference speed of 6.87 fps. These results verify the method's efficiency and practicality on edge devices.
{"title":"Weakly supervised semantic segmentation of microscopic carbonates on marginal devices","authors":"Keran Li , Yujie Gao , Yingjie Ma , Chengkun Li , Junjie Ye , Hao Yu , Yiming Xu , Dongyu Zheng , Ardiansyah Koeshidayatullah","doi":"10.1016/j.cageo.2025.106059","DOIUrl":"10.1016/j.cageo.2025.106059","url":null,"abstract":"<div><div>Microscopic analysis is the cornerstone for uncovering the petrological and mineralogical characteristics of carbonate rocks, and such information is critical for the precise identification of carbonate microfacies and diagenetic evolution. Obtaining this information, however, relies heavily on manual expertise and is time-consuming and laborious. Recently, several deep learning models have shown great potential for automating this identification process. However, current deep learning models typically have complex architectures that greatly hinder deployment and inference in practical, lightweight environments. To overcome the difficulty of running deep learning inference in real edge scenarios, a three-stage segmentation method based on weakly supervised learning is proposed. The approach embeds class activation mapping (CAM), grey level co-occurrence matrix (GLCM), and knowledge distillation (KD) modules to achieve attention transfer to a lightweight network (CamNet). Furthermore, based on the model's performance and application requirements, a lightweight carbonate thin-section image-assisted recognition system has been developed. Through careful control-flow design, this system achieves an effective balance between runtime latency and resource consumption while maintaining strong performance. Experimental results indicate that CamNet's total parameter count is only 800k, and when deployed in embedded systems it achieves an inference speed of 6.87 fps. These results verify the method's efficiency and practicality on edge devices.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"207 ","pages":"Article 106059"},"PeriodicalIF":4.4,"publicationDate":"2026-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145269657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
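Among CamNet's texture cues, the abstract names a grey level co-occurrence matrix (GLCM) module. As a minimal sketch of the standard GLCM construction for a single pixel offset (a generic textbook formulation, not CamNet's implementation; the toy image is made up):

```python
import numpy as np

def glcm(image, levels, offset=(0, 1), symmetric=True, normed=True):
    """Grey level co-occurrence matrix for one (row, col) offset.

    Counts how often grey level i co-occurs with grey level j at the
    given spatial offset; symmetrization and normalization follow the
    usual Haralick-style convention.
    """
    img = np.asarray(image)
    dr, dc = offset
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[img[r, c], img[r2, c2]] += 1
    if symmetric:
        m = m + m.T          # count each pair in both directions
    if normed:
        m /= m.sum()         # convert counts to joint probabilities
    return m

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])  # tiny quantized grey-level image
g = glcm(img, levels=3)      # horizontal-neighbor co-occurrences
```

Haralick statistics (contrast, homogeneity, energy) derived from such a matrix are a common way to hand a network compact texture features, which is plausibly why a GLCM module suits a parameter-budgeted model like CamNet.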