Shallow water area extraction method for multispectral remote sensing imagery based on adaptive object NDWI thresholding
Pub Date: 2026-02-02 | DOI: 10.1016/j.cageo.2026.106135 | Computers & Geosciences 210, Article 106135
Zhipeng Dong, Yanxiong Liu, Yikai Feng, Kai Guo, Yilan Chen, Yanli Wang
Shallow water area extraction from multispectral remote sensing images is a key component of satellite-derived bathymetry (SDB). To address the susceptibility of the extraction process to image noise and the difficulty of accurately setting spectral extraction thresholds in shallow water areas, this paper proposes a shallow water area extraction method for multispectral remote sensing images based on adaptive object NDWI thresholding. First, the image is segmented into superpixel objects using the simple linear iterative clustering (SLIC) algorithm, and the normalized difference water index (NDWI) is calculated for each object. Second, the optimal NDWI threshold for shallow water areas is obtained with an object-based adaptive threshold calculation algorithm, and the initial shallow water area is extracted using this threshold. Finally, the initial shallow water area is refined using a region growing algorithm. The proposed method is compared with several state-of-the-art shallow water area extraction algorithms on six islands and near-shore areas under different environmental conditions. The experimental results show that the proposed method outperforms the other algorithms and accurately extracts the shallow water area around islands and coastal zones under different environmental conditions.
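For readers who want to prototype the object-based workflow described above, a minimal sketch is given below. It uses SLIC superpixels from scikit-image and per-object NDWI, but substitutes Otsu's method for the paper's object-adaptive threshold and omits the region-growing refinement; the band inputs, segment count and threshold rule are assumptions, not the authors' implementation.

```python
# Minimal sketch of an object-based NDWI water-masking pipeline, assuming green
# and NIR bands as 2-D arrays; Otsu's method stands in for the paper's adaptive
# threshold, and the region-growing refinement is omitted.
import numpy as np
from skimage.segmentation import slic
from skimage.filters import threshold_otsu

def object_ndwi_water_mask(green, nir, n_segments=2000):
    """Return a boolean water mask from per-superpixel NDWI values."""
    green, nir = green.astype(float), nir.astype(float)
    ndwi = (green - nir) / (green + nir + 1e-8)        # pixel-wise NDWI
    pseudo_rgb = np.dstack([green, green, nir])        # SLIC expects a multichannel image
    segments = slic(pseudo_rgb, n_segments=n_segments, compactness=10)
    labels = np.unique(segments)
    obj_ndwi = np.array([ndwi[segments == s].mean() for s in labels])  # object-level NDWI
    thr = threshold_otsu(obj_ndwi)                     # stand-in for the adaptive threshold
    water_labels = labels[obj_ndwi > thr]
    return np.isin(segments, water_labels)
```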
{"title":"Shallow water area extraction method for multispectral remote sensing imagery based on adaptive object NDWI thresholding","authors":"Zhipeng Dong , Yanxiong Liu , Yikai Feng , Kai Guo , Yilan Chen , Yanli Wang","doi":"10.1016/j.cageo.2026.106135","DOIUrl":"10.1016/j.cageo.2026.106135","url":null,"abstract":"<div><div>Shallow water area extraction from multispectral remote sensing images is a key component of satellite derived bathymetry (SDB). With the respect to the issues of susceptibility to image noise and difficulty in accurately setting spectral extraction thresholds during the extraction process in shallow water areas, the paper proposes a shallow water area extraction method for multispectral remote sensing images based on adaptive object NDWI thresholding. First, the image is segmented to generate superpixel objects using the simple linear iterative clustering algorithm, and the normalized difference water index (NDWI) is calculated for each object. Second, the optimal threshold for NDWI in shallow water areas is obtained based on an object adaptive threshold calculation algorithm, and the initial shallow water area is extracted based on the optimal NDWI threshold. Finally, the initial shallow water area is refined using a region growing algorithm. The proposed method is compared with some state-of-the-art shallow water area extraction algorithms using six islands and near-shore areas under different environmental conditions. The experimental results show that the proposed method outperforms other shallow water area extraction algorithms, and can accurately extract the shallow water area around the islands and coastal zones under different environmental conditions.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"210 ","pages":"Article 106135"},"PeriodicalIF":4.4,"publicationDate":"2026-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146102918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional digital core reconstruction from 2D SEM images of heterogeneous shale samples
Pub Date: 2026-01-20 | DOI: 10.1016/j.cageo.2026.106119 | Computers & Geosciences 209, Article 106119
Qian Feng, Xiaofeng Xu, Ren Wang, Wanzhong Shi, Qintao Guo, Bowei Yang, Zijie Zhang, Xiaoming Zhang, Mehdi Ostadhassan
The construction of 3D digital core models for heterogeneous shale samples presents significant challenges due to the multi-scale nature of these materials. In this study, we propose an optimized generative adversarial network (GAN) method, known as SliceGAN-RFB, to overcome these challenges by leveraging scanning electron microscopy (SEM) images for digital core reconstruction. The SliceGAN-RFB model entails key improvements over the traditional SliceGAN, including the use of the Sobel operator for gradient-based image subset extraction and the integration of the receptive field block (RFB) into the discriminator network to enhance multi-scale feature extraction. SliceGAN-RFB performs multiple critic iterations during the training phase to improve the quality of the generated digital core model. The results demonstrated that the digital cores generated by SliceGAN-RFB more closely resembled the true samples, with clay minerals exhibiting greater continuity. Additionally, the spatial distributions of the pores and pyrite within the digital core were closer to those in the original two-dimensional SEM images. The two-point connectivity probability function (2PC) curve further validated that the digital model generated by SliceGAN-RFB was more accurate and consistent with the original data than that generated by SliceGAN. Ultimately, digital core generation with SliceGAN-RFB is particularly valuable for heterogeneous materials such as shale, in contrast to more homogeneous materials such as sandstone or battery components. This enhanced capability enables the establishment of pore network models and flow simulations of shale and other heterogeneous materials, which are important in various fields, including hydrogen storage, carbon capture, utilization and storage (CCUS), and nuclear waste containment.
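The Sobel-based "gradient-based image subset extraction" step can be pictured with the short sketch below: candidate SEM patches are scored by their mean gradient magnitude and only the most textured ones are kept. The patch size, scoring rule and keep fraction are illustrative assumptions rather than the authors' code.

```python
# Minimal sketch of gradient-guided training-patch selection with the Sobel
# operator; one plausible reading of "gradient-based image subset extraction".
# Patch size and the selection rule are assumptions, not the published method.
import numpy as np
from scipy.ndimage import sobel

def select_informative_patches(sem_image, patch=64, keep_fraction=0.5):
    """Keep the patches whose mean Sobel gradient magnitude is highest."""
    gx, gy = sobel(sem_image, axis=0), sobel(sem_image, axis=1)
    grad = np.hypot(gx, gy)                              # gradient magnitude
    patches, scores = [], []
    for i in range(0, sem_image.shape[0] - patch + 1, patch):
        for j in range(0, sem_image.shape[1] - patch + 1, patch):
            patches.append(sem_image[i:i + patch, j:j + patch])
            scores.append(grad[i:i + patch, j:j + patch].mean())
    order = np.argsort(scores)[::-1]                     # most textured patches first
    n_keep = max(1, int(len(order) * keep_fraction))
    return [patches[k] for k in order[:n_keep]]
```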
{"title":"Three-dimensional digital core reconstruction from 2D SEM images of heterogeneous shale samples","authors":"Qian Feng , Xiaofeng Xu , Ren Wang , Wanzhong Shi , Qintao Guo , Bowei Yang , Zijie Zhang , Xiaoming Zhang , Mehdi Ostadhassan","doi":"10.1016/j.cageo.2026.106119","DOIUrl":"10.1016/j.cageo.2026.106119","url":null,"abstract":"<div><div>The construction of 3D digital core models for heterogeneous shale samples presents significant challenges due to the multi-scale nature of these materials. In this study, we propose an optimized generative adversarial network (GAN) method, known as SliceGAN-RFB, to overcome these challenges by leveraging scanning electron microscopy (SEM) images for digital core reconstruction. The SliceGAN-RFB model entails key improvements over the traditional SliceGAN, including the use of the Sobel operator for gradient-based image subset extraction and the integration of the receptive field block (RFB) into the discriminator network to enhance multi-scale feature extraction. SliceGAN-RFB attempts multiple critical iterations (here, we call them critic iters) during the training phase to increase the quality of the generated digital core model. The results demonstrated that the digital cores generated by SliceGAN-RFB better resembled the true samples, where the clay minerals exhibited greater continuity. Additionally, the spatial distributions of the pores and pyrite within the digital core were found to be closer to those in the original two-dimensional SEM images. The two-point connectivity probability function (2 PC) curve further validated that the digital model generated by SliceGAN-RFB was more accurate and consistent with the original data than the SliceGAN was. Ultimately, the generation of digital cores by the SliceGAN-RFB is particularly important when dealing with heterogeneous materials such as shale in comparison with homogeneous materials such as sandstone or battery components. This enhanced capability enables the establishment of pore network models and flow simulations of shale and other heterogeneous materials, which are important in various fields, including hydrogen storage, carbon capture, utilization and storage (CCUS), and nuclear waste containment.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106119"},"PeriodicalIF":4.4,"publicationDate":"2026-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146079103","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A spatiotemporal deep learning framework integrating CNN-BiLSTM and attention mechanisms for GRACE data downscaling in Yunnan Province
Pub Date: 2026-01-19 | DOI: 10.1016/j.cageo.2026.106117 | Computers & Geosciences 209, Article 106117
Yang He, Qi Chen, Zhifang Zhao, Dayu Cai, Liu Ouyang, Xiaoxiao Zhang, Yu Gao, Junrong Zhou
The Gravity Recovery and Climate Experiment (GRACE) dataset has emerged as a pivotal tool for quantifying terrestrial water storage (TWS) anomalies at regional scales. However, its coarse spatial resolution (∼3°) introduces substantial uncertainties in localized hydrological analyses. To overcome this limitation, we developed a spatiotemporal deep learning framework that synergistically integrates Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory networks (BiLSTM), enhanced by a time-space attention mechanism. Applied to Yunnan Province, China, this framework achieved a tenfold resolution enhancement (from 1° to 0.1°) while preserving high consistency with raw GRACE data (cc = 0.94). Validation against independent datasets demonstrated a 6–15 % improvement in the coefficient of determination (R²) over conventional downscaling methods, while maintaining moderate to strong correlations (r = 0.53–0.74) with WGHM products and river-lake water level data. Multivariate analysis revealed statistically significant couplings between downscaled TWS variations and key environmental drivers, including soil moisture (SoilMoi), land surface temperature (LST), evapotranspiration (E), the Normalized Difference Vegetation Index (NDVI), and precipitation (TP). The refined GRACE Drought Severity Index (GRACE-DSI) exhibited enhanced synchronization with the Standardized Precipitation Evapotranspiration Index (SPEI), showing a >10 % increase in correlation coefficients compared to pre-downscaling values. This methodological advancement enabled precise spatiotemporal characterization of drought dynamics during the 2002–2023 period, particularly capturing the 2009–2012 extreme drought and 2019–2021 pluvial anomalies with sub-basin spatial fidelity. Our framework provides an operational solution for high-resolution hydrological monitoring, offering critical insights for adaptive water resource management in topographically complex regions.
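A minimal PyTorch sketch of the kind of CNN + BiLSTM + temporal-attention regressor described above is given below; the layer sizes, attention form and input layout are illustrative assumptions and do not reproduce the authors' architecture.

```python
# Minimal sketch of a CNN + BiLSTM + attention regressor for gridded time series;
# the input layout (batch, time, channels, H, W) and all sizes are assumptions.
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    def __init__(self, in_ch=5, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                      # per-time-step spatial encoder
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                   # -> (B*T, 16, 1, 1)
        )
        self.bilstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)           # temporal attention scores
        self.head = nn.Linear(2 * hidden, 1)           # downscaled TWS anomaly

    def forward(self, x):                              # x: (B, T, C, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)   # (B*T, 16)
        seq, _ = self.bilstm(feats.view(b, t, -1))     # (B, T, 2*hidden)
        w = torch.softmax(self.attn(seq), dim=1)       # (B, T, 1) attention weights
        context = (w * seq).sum(dim=1)                 # attention-weighted summary
        return self.head(context).squeeze(-1)

# e.g. model = CNNBiLSTMAttention(); y = model(torch.randn(8, 12, 5, 16, 16))
```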
{"title":"A spatiotemporal deep learning framework integrating CNN-BiLSTM and attention mechanisms for GRACE data downscaling in Yunnan Province","authors":"Yang He , Qi Chen , Zhifang Zhao , Dayu Cai , Liu Ouyang , Xiaoxiao Zhang , Yu Gao , Junrong Zhou","doi":"10.1016/j.cageo.2026.106117","DOIUrl":"10.1016/j.cageo.2026.106117","url":null,"abstract":"<div><div>The Gravity Recovery and Climate Experiment (GRACE) dataset has emerged as a pivotal tool for quantifying terrestrial water storage (TWS) anomalies at regional scales. However, its coarse spatial resolution (∼3°) introduces substantial uncertainties in localized hydrological analyses. To overcome this limitation, we developed a spatiotemporal deep learning framework that synergistically integrates Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory networks (BiLSTM), enhanced by a time-space attention mechanism. Applied to Yunnan Province, China, this framework achieved a tenfold resolution enhancement (1°–0.1°), preserving high consistency with raw GRACE data (cc = 0.94). Validation against independent datasets demonstrated a 6–15 % improvement in Coefficient of Determination (R<sup>2</sup>) over conventional downscaling methods, while maintaining moderate to strong correlations (r = 0.53–0.74) with WGHM products and river-lake water level data. Multivariate analysis revealed statistically significant couplings between downscaled TWS variations and key environmental drivers, including soil moisture (SoilMoi), land surface temperature (LST), evapotranspiration (E), the Normalized Difference Vegetation Index (NDVI), and precipitation (TP). The refined GRACE Drought Severity Index (GRACE-DSI) exhibited enhanced synchronization with the Standardized Precipitation Evapotranspiration Index (SPEI), showing a >10 % increase in correlation coefficients compared to pre-downscaling values. This methodological advancement enabled precise spatiotemporal characterization of drought dynamics during the 2002–2023 period, particularly capturing the 2009–2012 extreme drought and 2019–2021 pluvial anomalies with sub-basin spatial fidelity. Our framework provides an operational solution for high-resolution hydrological monitoring, offering critical insights for adaptive water resource management in topographically complex regions.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106117"},"PeriodicalIF":4.4,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncertainty quantification using Hamiltonian Monte Carlo for structural geological modelling with implicit neural representations (INR)
Pub Date: 2026-01-15 | DOI: 10.1016/j.cageo.2026.106123 | Computers & Geosciences 209, Article 106123
Kaifeng Gao, Michael Hillier, Florian Wellmann
Three-dimensional geological modelling is an essential tool for understanding subsurface features, supporting advanced exploration of natural resources, their sustainable development, and the identification of optimal locations for carbon storage. Recently, efficient neural network approaches have been developed to handle large datasets and to integrate diverse observations and prior knowledge into geological models. Previous work has demonstrated that neural networks are powerful tools for geological modelling, but quantifying uncertainty in their predictions remains an open issue. In this work, we address the uncertainty arising from both network parameters and observational data. We explore the full space of possible geological model realizations using a Hamiltonian Monte Carlo sampler, and quantify the uncertainty of predicted geological interfaces within a Bayesian neural network framework. Our experimental results demonstrate that the Hamiltonian Monte Carlo sampler effectively explores the posterior distribution in function space and quantifies the uncertainty of predicted geological interfaces for both a noise-free borehole dataset from the North Sea and a noisy dataset interpreted from geophysical well logs in Saskatchewan, Canada. We also apply the method to a simple faulting scenario involving a normal fault in flat stratigraphy. Furthermore, in comparison with the commonly used Monte Carlo dropout approach, the Hamiltonian Monte Carlo sampler exhibits superior accuracy in assessing epistemic uncertainty on a noise-free dataset. However, computational efficiency remains a potential challenge for large datasets and networks.
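The core HMC transition used in such samplers can be summarized in a few lines of NumPy: leapfrog integration of an auxiliary momentum variable followed by a Metropolis accept/reject step. The sketch below is generic (it applies to any differentiable log-posterior) with illustrative step size and path length; it is not the authors' implementation.

```python
# Minimal sketch of one Hamiltonian Monte Carlo transition: leapfrog integration
# plus Metropolis acceptance. The target's log-density and gradient are user-
# supplied; step size and number of leapfrog steps are illustrative choices.
import numpy as np

def hmc_step(theta, log_prob, grad_log_prob, step=0.05, n_leapfrog=20, rng=np.random):
    """One HMC update of parameter vector theta for target density exp(log_prob)."""
    p = rng.standard_normal(theta.shape)               # sample auxiliary momentum
    theta_new, p_new = theta.copy(), p.copy()
    p_new += 0.5 * step * grad_log_prob(theta_new)     # half step for momentum
    for _ in range(n_leapfrog):
        theta_new += step * p_new                      # full step for position
        p_new += step * grad_log_prob(theta_new)       # full step for momentum
    p_new -= 0.5 * step * grad_log_prob(theta_new)     # undo the extra half step
    # Metropolis acceptance on the Hamiltonian (negative log joint)
    h_old = -log_prob(theta) + 0.5 * p @ p
    h_new = -log_prob(theta_new) + 0.5 * p_new @ p_new
    return theta_new if np.log(rng.uniform()) < h_old - h_new else theta
```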
{"title":"Uncertainty quantification using Hamiltonian Monte Carlo for structural geological modelling with implicit neural representations (INR)","authors":"Kaifeng Gao , Michael Hillier , Florian Wellmann","doi":"10.1016/j.cageo.2026.106123","DOIUrl":"10.1016/j.cageo.2026.106123","url":null,"abstract":"<div><div>Three-dimensional geological modelling is an essential tool for understanding subsurface features, supporting advanced exploration of natural resources, their sustainable development, and the identification of optimal locations for carbon storage. Recently, efficient neural network approaches have been developed to handle large datasets and to integrate diverse observations and prior knowledge into geological models. Previous work has demonstrated that neural networks are powerful tools for geological modelling, but quantifying uncertainty in their predictions remains an open issue. In this work, we address the uncertainty arising from both network parameters and observational data. We explore the full space of possible geological model realizations using a Hamiltonian Monte Carlo sampler, and quantify the uncertainty of predicted geological interfaces within a Bayesian neural network framework. Our experimental results demonstrate that the Hamiltonian Monte Carlo sampler effectively explores the posterior distribution in function space and quantifies the uncertainty of predicted geological interfaces for both a noise-free borehole dataset from the North Sea and a noisy dataset interpreted from geophysical well logs in Saskatchewan, Canada. We also apply the method to a simple faulting scenario involving a normal fault in flat stratigraphy. Furthermore, in comparison with the commonly used Monte Carlo dropout approach, the Hamiltonian Monte Carlo sampler exhibits superior accuracy in assessing epistemic uncertainty in a noise-free dataset. However, computational efficiency remains a potential challenge in large dataset and network.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106123"},"PeriodicalIF":4.4,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
WaveDiffDecloud: Wavelet-domain conditional diffusion model for efficient cloud removal
Pub Date: 2026-01-15 | DOI: 10.1016/j.cageo.2026.106121 | Computers & Geosciences 209, Article 106121
Yingjie Huang, Zewen Wang, Min Luo, Shufang Qiu
Cloud cover frequently occludes up to 60% of optical satellite acquisitions, creating data gaps and radiometric distortions that impede continuous Earth-monitoring applications. Diffusion models have recently demonstrated significant potential for image restoration, but their direct use in cloud removal remains limited by two factors: slow inference due to iterative denoising in high-dimensional pixel space and insufficient preservation of fine structural details, often resulting in texture blurring and boundary artifacts. To address these limitations, we propose WaveDiffDecloud, a wavelet-domain conditional diffusion framework for efficient and high-fidelity cloud removal. Instead of generating pixels directly, our method learns to synthesize the wavelet coefficients of cloud-free images, conditioned on cloudy inputs. This design substantially reduces computational complexity while preserving more fine structures. To further enhance texture fidelity, we introduce a Structure- and Texture-aware High-Frequency Reconstruction module, optimized using a physics-inspired cloud-aware loss. This module explicitly models correlations among high-frequency subbands, enabling accurate recovery of surface textures and sharp boundaries at cloud edges. Experimental results on the RICE and NUAA-CR4L89 benchmarks demonstrate that WaveDiffDecloud achieves state-of-the-art performance. Notably, on the RICE-I dataset, our method achieves the best SSIM of 0.957 and LPIPS of 0.063, significantly outperforming existing methods in texture fidelity while maintaining competitive PSNR. Furthermore, our model exhibits exceptional robustness and spectral consistency across multi-band scenarios ranging from visible to thermal infrared wavelengths. These results highlight the potential of wavelet-based diffusion models to balance reconstruction fidelity and efficiency, paving the way for practical, large-scale cloud removal in optical remote sensing imagery.
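The wavelet-domain representation that the diffusion model operates on can be illustrated with PyWavelets: a single-level 2-D DWT splits each band into one low-frequency and three high-frequency subbands, and the inverse transform returns to pixel space. The wavelet choice below is an assumption; the actual model predicts cloud-free coefficients conditioned on the cloudy input.

```python
# Minimal sketch of the wavelet-domain representation: an image maps to one
# low-frequency (LL) and three high-frequency (LH, HL, HH) subbands, a generative
# model would operate on those coefficients, and the inverse DWT returns to pixel
# space. The wavelet ('haar') is an illustrative assumption.
import numpy as np
import pywt

def to_wavelet_domain(img):
    """2-D single-level DWT: returns (LL, (LH, HL, HH)) coefficient arrays."""
    return pywt.dwt2(img, "haar")

def from_wavelet_domain(coeffs):
    """Inverse DWT back to pixel space."""
    return pywt.idwt2(coeffs, "haar")

# Round trip on a dummy band: a cloud-removal model would replace `coeffs`
# with predicted cloud-free coefficients before inverting.
img = np.random.rand(256, 256)
coeffs = to_wavelet_domain(img)
recon = from_wavelet_domain(coeffs)
assert np.allclose(recon, img)
```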
{"title":"WaveDiffDecloud: Wavelet-domain conditional diffusion model for efficient cloud removal","authors":"Yingjie Huang , Zewen Wang , Min Luo , Shufang Qiu","doi":"10.1016/j.cageo.2026.106121","DOIUrl":"10.1016/j.cageo.2026.106121","url":null,"abstract":"<div><div>Cloud cover frequently occludes up to 60% of optical satellite acquisitions, creating data gaps and radiometric distortions that impede continuous Earth-monitoring applications. Diffusion models have recently demonstrated significant potential for image restoration, but their direct use in cloud removal remains limited by two factors: slow inference due to iterative denoising in high-dimensional pixel space and insufficient preservation of fine structural details, often resulting in texture blurring and boundary artifacts. To address these limitations, we propose WaveDiffDecloud, a wavelet-domain conditional diffusion framework for efficient and high-fidelity cloud removal. Instead of generating pixels directly, our method learns to synthesize the wavelet coefficients of cloud-free images, conditioned on cloudy inputs. This design substantially reduces computational complexity while preserving more fine structures. To further enhance texture fidelity, we introduce a Structure- and Texture-aware High-Frequency Reconstruction module, optimized using a physics-inspired cloud-aware loss. This module explicitly models correlations among high-frequency subbands, enabling accurate recovery of surface textures and sharp boundaries at cloud edges. Experimental results on the RICE and NUAA-CR4L89 benchmarks demonstrate that WaveDiffDecloud achieves state-of-the-art performance. Notably, on the RICE-I dataset, our method achieves the best SSIM of 0.957 and LPIPS of 0.063, significantly outperforming existing methods in texture fidelity while maintaining competitive PSNR. Furthermore, our model exhibits exceptional robustness and spectral consistency across multi-band scenarios ranging from visible to thermal infrared wavelengths. These results highlight the potential of wavelet-based diffusion models to balance reconstruction fidelity and efficiency, paving the way for practical, large-scale cloud removal in optical remote sensing imagery.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106121"},"PeriodicalIF":4.4,"publicationDate":"2026-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Convolutional surrogate for 3D discrete fracture–matrix tensor upscaling
Pub Date: 2026-01-14 | DOI: 10.1016/j.cageo.2026.106105 | Computers & Geosciences 209, Article 106105
Martin Špetlík, Jan Březina
Modeling groundwater flow in three-dimensional fractured crystalline media requires capturing the spatial heterogeneity introduced by fractures. Direct numerical simulations using fine-scale discrete fracture–matrix (DFM) models are computationally demanding, particularly when repeated evaluations are needed. We aim to use a multilevel Monte Carlo (MLMC) method in the future to reduce computational cost while retaining accuracy. When transitioning between accuracy levels, numerical homogenization is used to upscale the impact of the hydraulic conductivity of sub-resolution fractures. To reduce the computational cost of conventional 3D numerical homogenization, we develop a surrogate model that predicts the equivalent hydraulic conductivity tensor, K^eq, from a voxelized 3D domain representing a tensor-valued random field of matrix and fracture hydraulic conductivities. Fracture properties, including size, orientation, and aperture, are sampled from distributions informed by natural observations. The surrogate architecture combines a 3D convolutional neural network with feed-forward layers to capture both local spatial patterns and global interactions. Three surrogates are trained on data generated by discrete fracture–matrix (DFM) simulations, each corresponding to a different fracture-to-matrix conductivity ratio. Their performance is evaluated across varying fracture network parameters and correlation lengths of the matrix field. The trained surrogates achieve high prediction accuracy (NRMSE < 0.22) in a wide range of test scenarios. To demonstrate practical applicability, we compare conductivities upscaled by numerical homogenization and by our surrogates in two macro-scale problems: computation of equivalent tensors of hydraulic conductivity and prediction of outflow from a constrained 3D area. In both cases, the surrogate-based approach preserves accuracy while substantially reducing computational cost. Surrogate-based upscaling achieves speedups exceeding 100× when inference is performed on a GPU.
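A minimal sketch of such a surrogate is shown below: a small 3D CNN maps a voxelized conductivity field to the six independent components of the symmetric K^eq tensor, together with an NRMSE metric of the kind quoted above. The layer sizes and the exact NRMSE normalization are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch of a 3D convolutional surrogate mapping a voxelized conductivity
# field to the six independent components of a symmetric K^eq tensor; sizes and
# the NRMSE definition are illustrative assumptions.
import torch
import torch.nn as nn

class KeqSurrogate(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, x):               # x: (B, C, D, H, W) voxelized conductivities
        return self.head(self.body(x))  # (B, 6): Kxx, Kyy, Kzz, Kxy, Kxz, Kyz

def nrmse(pred, target):
    """Root-mean-square error normalized by the target range."""
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))
    return rmse / (target.max() - target.min())

# e.g. model = KeqSurrogate(); k = model(torch.randn(4, 1, 32, 32, 32))
```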
{"title":"Convolutional surrogate for 3D discrete fracture–matrix tensor upscaling","authors":"Martin Špetlík, Jan Březina","doi":"10.1016/j.cageo.2026.106105","DOIUrl":"10.1016/j.cageo.2026.106105","url":null,"abstract":"<div><div>Modeling groundwater flow in three-dimensional fractured crystalline media requires capturing the spatial heterogeneity introduced by fractures. Direct numerical simulations using fine-scale discrete fracture–matrix (DFM) models are computationally demanding, particularly when repeated evaluations are needed. We aim to use a multilevel Monte Carlo (MLMC) method in the future to reduce computational cost while retaining accuracy. When transitioning between accuracy levels, numerical homogenization is used to upscale the impact of the hydraulic conductivity of sub-resolution fractures. To reduce the computational cost of conventional 3D numerical homogenization, we develop a surrogate model that predicts the equivalent hydraulic conductivity tensor, <span><math><msup><mrow><mi>K</mi></mrow><mrow><mi>e</mi><mi>q</mi></mrow></msup></math></span>, from a voxelized 3D domain representing a tensor-valued random field of matrix and fracture hydraulic conductivities. Fracture properties, including size, orientation, and aperture, are sampled from distributions informed by natural observations. The surrogate architecture combines a 3D convolutional neural network with feed-forward layers to capture both local spatial patterns and global interactions. Three surrogates are trained on data generated by discrete fracture–matrix (DFM) simulations, each corresponding to a different fracture-to-matrix conductivity ratio. Their performance is evaluated across varying fracture network parameters and correlation lengths of the matrix field. The trained surrogates achieve high prediction accuracy (<span><math><mrow><mtext>NRMSE</mtext><mo><</mo><mn>0</mn><mo>.</mo><mn>22</mn></mrow></math></span>) in a wide range of test scenarios. To demonstrate practical applicability, we compare conductivities upscaled by numerical homogenization and by our surrogates in two macro-scale problems: computation of equivalent tensors of hydraulic conductivity and prediction of outflow from a constrained 3D area. In both cases, the surrogate-based approach preserves accuracy while substantially reducing computational cost. Surrogate-based upscaling achieves speedups exceeding <span><math><mrow><mn>100</mn><mo>×</mo></mrow></math></span> when inference is performed on a GPU.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106105"},"PeriodicalIF":4.4,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of natural gamma and neutron porosity based on waveform structures of elastic parameters using a closed-loop deep learning fusion network
Pub Date: 2026-01-13 | DOI: 10.1016/j.cageo.2026.106122 | Computers & Geosciences 209, Article 106122
Yingjie Ma, Gang Gao, Haojie Liu, Xiaoyan Zhai
Natural gamma (GR) and neutron porosity (NPHI) have been proven to be effective parameters for identifying and characterizing shale oil reservoirs in borehole geophysical studies. However, predicting these petrophysical properties from elastic parameters presents considerable difficulties due to the complex nonlinear relationships between measurements of different physical properties of rocks, and the relevant research literature remains limited. Conventional approaches typically employ multivariate regression techniques to transform P-wave velocity (Vp), S-wave velocity (Vs), and density (Den/ρ) into GR and NPHI. However, these regression methods are constrained in application by their single-point mapping structures and linear mapping relationships, which reduce prediction accuracy. To overcome these limitations, this study proposes an approach that predicts GR and NPHI by leveraging the waveform structures of the data within a closed-loop deep learning fusion network. Input samples were constructed from the waveform structure of the data to capture the temporal characteristics neglected by single-point inputs, thereby mitigating the non-uniqueness of single-point mappings. To address the complex nonlinear relationships, a fusion network (STABGCN) combining a Convolutional Neural Network (CNN), a Bidirectional Gated Recurrent Unit (BiGRU), and an attention mechanism was established. This network design enables the extraction of sequential features along the well trajectory, captures spatial patterns across the entire field area, and optimizes feature weighting through attention mechanisms. A closed-loop deep learning network was constructed and trained using a semi-supervised learning approach to improve its generalization capability by leveraging the abundance of unlabeled elastic parameter data. The Dongying Sag was selected as the study area for method validation and application; specifically, the feasibility of predicting GR and NPHI from elastic parameters was systematically evaluated from three perspectives: the correlation between these parameters, the characteristics of their waveform structures, and the prediction accuracy of the forward and inversion deep learning networks. Numerical and field data validation demonstrated that the proposed method significantly improved prediction accuracy. In model testing, the proposed method achieved R² values of 0.7857 and 0.89 for GR and NPHI prediction, respectively, confirming its effectiveness in enhancing key petrophysical parameter prediction.
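The "waveform structure" input construction can be pictured as a sliding window over the elastic logs, as in the hedged sketch below; the window length and the choice of targets at the window centre are assumptions, not the authors' preprocessing.

```python
# Minimal sketch of building "waveform structure" training samples from elastic
# logs: instead of mapping a single depth point to GR/NPHI, each sample is a
# sliding window of Vp, Vs and density centred on the target depth. The window
# half-length is an illustrative assumption.
import numpy as np

def build_waveform_samples(vp, vs, den, gr, nphi, half_window=16):
    """Return X of shape (N, 2*half_window+1, 3) and y of shape (N, 2)."""
    logs = np.stack([vp, vs, den], axis=-1)                    # (depth, 3)
    X, y = [], []
    for i in range(half_window, len(vp) - half_window):
        X.append(logs[i - half_window:i + half_window + 1])    # local waveform window
        y.append([gr[i], nphi[i]])                             # targets at the centre depth
    return np.asarray(X), np.asarray(y)

# e.g. X, y = build_waveform_samples(vp, vs, den, gr, nphi) with 1-D numpy logs
```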
{"title":"Prediction of natural gamma and neutron porosity based on waveform structures of elastic parameters using a closed-loop deep learning fusion network","authors":"Yingjie Ma , Gang Gao , Haojie Liu , Xiaoyan Zhai","doi":"10.1016/j.cageo.2026.106122","DOIUrl":"10.1016/j.cageo.2026.106122","url":null,"abstract":"<div><div>Natural gamma (GR) and neutron porosity (NPHI) have been proven to be effective parameters for identifying and characterizing shale oil reservoirs in borehole geophysical studies. However, prediction of these petrophysical properties from elastic parameters presents considerable difficulties due to the complex nonlinear relationships between measurements of different physical properties of rocks, while relevant research literature remains limited. Conventional approaches typically employ multivariate regression techniques to transform P-wave velocity (Vp), S-wave velocity (Vs), and density (Den/ρ) into GR and NPHI. However, these regression methods are constrained during application due to their adoption of single-point mapping structures and linear mapping relationships, which consequently reduce prediction accuracy. To overcome these limitations, this study proposes an approach that predicts GR and NPHI by leveraging the waveform structures of data within a closed-loop deep learning fusion network. Input samples were constructed from the waveform structure of data to capture the temporal characteristics neglected by single-point inputs, thereby resolving the non-unique mapping to data. To address the complex nonlinear relationships, a fusion network of a Convolutional Neural Network (CNN), a Bidirectional Gated Recurrent Unit (BiGRU), and an attention mechanism was established (STABGCN). This network design enables the extraction of sequential features along well trajectory, captures spatial patterns across the entire field area, and optimizes feature weighting through attention mechanisms. A closed-loop deep learning network was constructed and trained using a semi-supervised learning approach to improve its generalization capability by leveraging the abundance of unlabeled elastic parameter data. The Dongying Sag was selected as the study area for method validation and application, specifically, the feasibility of predicting GR and NPHI based on elastic parameters is systematically evaluated from three perspectives: the correlation between these parameters, the characteristics of their waveform structures, and how forward and inversion deep learning networks perform in prediction accuracy. Numerical and field data validation demonstrated that the proposed method significantly improved prediction accuracy. In model testing, the method proposed in this paper achieved R<sup>2</sup> values for GR and NPHI prediction were 0.7857 and 0.89 respectively, confirming the method's effectiveness in enhancing key petrophysical parameter prediction.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106122"},"PeriodicalIF":4.4,"publicationDate":"2026-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An open source FORTRAN subroutine for calculation of TEM responses and derivatives from 1D models
Pub Date: 2026-01-12 | DOI: 10.1016/j.cageo.2025.106102 | Computers & Geosciences 209, Article 106102
Niels B. Christensen, Anders V. Christiansen, Esben Auken, Nikolaj Foged
In this paper, an open source FORTRAN subroutine for the calculation of transient electromagnetic (TEM) responses of one-dimensional earth models in the quasi-static approximation is presented. The code accommodates the most common TEM instrument configurations used today and includes modelling of the effect of the system response, i.e. the influence of instrument properties on the responses. It also provides an option for calculating the derivatives of the response with respect to the model parameters of a one-dimensional earth model. Furthermore, induced polarisation effects can be included in the forward responses. The paper presents the considerations behind its creation and outlines the details of the computational elements used in its realisation. The subroutine is intended as a building block that can be included in other programs, specifically as an external computational resource for speeding up calculations in higher-level language modelling and inversion codes.
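As a usage illustration, a Fortran subroutine of this kind is typically wrapped for a higher-level language with NumPy's f2py. The sketch below is purely hypothetical: the source file name, subroutine name and argument list are placeholders, and the actual interface is the one defined in the published code.

```python
# Hypothetical wrapping of a 1D TEM forward subroutine with numpy.f2py.
# The file name "tem1d.f90", module name "tem1d", subroutine name "tem_forward"
# and its argument list are placeholders only; consult the published code for
# the real interface.
#
#   python -m numpy.f2py -c tem1d.f90 -m tem1d     # build a Python extension module
#
import numpy as np
# import tem1d                                     # the extension built above

thicknesses = np.array([20.0, 50.0])               # layer thicknesses (m), placeholder model
resistivities = np.array([100.0, 10.0, 1000.0])    # layer resistivities (ohm*m)
gates = np.logspace(-5, -2, 30)                    # receiver time gates (s)

# response = tem1d.tem_forward(thicknesses, resistivities, gates)   # hypothetical call
```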
{"title":"An open source FORTRAN subroutine for calculation of TEM responses and derivatives from 1D models","authors":"Niels B. Christensen , Anders V. Christiansen , Esben Auken , Nikolaj Foged","doi":"10.1016/j.cageo.2025.106102","DOIUrl":"10.1016/j.cageo.2025.106102","url":null,"abstract":"<div><div>In this paper, an open source FORTRAN subroutine for the calculation of transient electromagnetic responses of one-dimensional earth models in the quasi-static approximation is presented. The code accommodates the most common TEM instruments configurations used today and includes the modelling of the effect of the system response, i.e. the influence of instrument properties on the responses. It also provides an option for calculating the derivatives of the response with regard to the model parameters of a one-dimensional earth model. Furthermore, induced polarisation effects can be included in the forward responses. The paper presents the considerations behind its creation and outlines the details of the computational elements used in its realisation. The subroutine is intended as a building block that can be included in other programs, specifically as an external computational resource for speeding up calculations in higher order language modelling and inversion codes.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106102"},"PeriodicalIF":4.4,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GravCHAW: A software framework for the assimilation of time-lapse gravimetry data in groundwater models
Pub Date: 2026-01-12 | DOI: 10.1016/j.cageo.2026.106118 | Computers & Geosciences 209, Article 106118
Nazanin Mohammadi, Hamzeh Mohammadigheymasi, Landon J.S. Halloran
We present an open-source Python framework, GravCHAW (Gravimetric Coupled Hydro Assimilation Workflow), for the assimilation of time-lapse gravimetry (TLG) data into numerical groundwater models. This framework enables quantitative exploration of the full potential of TLG in reducing hydrogeological data gaps. TLG is a non-invasive geophysical method that can be used to monitor spatiotemporal variability of groundwater storage changes. At the software's core is a site-independent coupled hydrogravimetric model that accurately simulates TLG data. Using a range of advanced optimization and uncertainty analysis approaches in a Bayesian context, built around the hydrogravimetric model, the framework assimilates TLG data to estimate parameters, make predictions, and quantify uncertainty across diverse problem scales. In doing so, it accounts for both parameter priors and observation uncertainty, enabling a probabilistic uncertainty analysis. The framework can perform a coupled hydrogravimetric inversion assimilating TLG data individually or jointly with hydrological observations. To illustrate some of the core capacities of the framework, we apply it to a simple groundwater model and explore the propagation of observation uncertainty to parameter and model predictions. The results show that TLG can accurately estimate model parameters and significantly reduce uncertainty in parameters and predictions, both when assimilated individually and jointly with hydraulic head data, provided that the signal-to-noise ratio (SNR) is sufficiently high. Under this condition, while joint assimilation results in greater uncertainty reduction in our example case, TLG appears to make the most substantial contribution. GravCHAW will enable the reduction of uncertainty in groundwater models by integrating TLG data, which will be particularly impactful in data-poor situations.
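The magnitude of the TLG signal that such assimilation exploits can be estimated with the standard Bouguer-slab approximation, as in the sketch below; the specific yield value is an illustrative assumption, and GravCHAW's coupled hydrogravimetric model is considerably more general.

```python
# Back-of-the-envelope sketch of the hydrogravimetric forward signal using the
# infinite Bouguer slab approximation: a storage change expressed as an equivalent
# water thickness dh produces delta_g = 2*pi*G*rho_w*dh, roughly 41.9 microGal per
# metre of water. The specific yield below is an illustrative assumption.
import numpy as np

G = 6.674e-11          # gravitational constant (m^3 kg^-1 s^-2)
RHO_W = 1000.0         # water density (kg m^-3)

def bouguer_slab_gravity(dh_water_m):
    """Gravity change (microGal) for a water-equivalent thickness change (m)."""
    return 2.0 * np.pi * G * RHO_W * dh_water_m * 1e8   # 1 m/s^2 = 1e8 microGal

specific_yield = 0.15                       # assumed unconfined-aquifer storage coefficient
head_change = np.array([0.0, 0.5, 1.0])     # water-table rise (m)
print(bouguer_slab_gravity(specific_yield * head_change))   # ~0, 3.1, 6.3 microGal
```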
{"title":"GravCHAW: A software framework for the assimilation of time-lapse gravimetry data in groundwater models","authors":"Nazanin Mohammadi , Hamzeh Mohammadigheymasi , Landon J.S. Halloran","doi":"10.1016/j.cageo.2026.106118","DOIUrl":"10.1016/j.cageo.2026.106118","url":null,"abstract":"<div><div>We present an open-source <em>python</em> framework <em>GravCHAW (Gravimetric Coupled Hydro Assimilation Workflow)</em> for the assimilation of time-lapse gravimetry (TLG) data into numerical groundwater models. This framework enables quantitative exploration of the full potential of TLG in reducing hydrogeological data gaps. TLG is a non-invasive geophysical method that can be used to monitor spatiotemporal variability of groundwater storage changes. At the software’s core is a site-independent coupled hydrogravimetric model that accurately simulates TLG data. Using a range of advanced optimization and uncertainty analysis approaches in a Bayesian context, built around the hydrogravimetric model, the framework assimilates TLG data to estimate parameters, make predictions, and quantify uncertainty across diverse problem scales. In doing so, it accounts for both parameter priors and observation uncertainty, enabling a probabilistic uncertainty analysis. The framework can perform a coupled hydrogravimetric inversion assimilating TLG data individually or jointly with hydrological observations. To illustrate some of the core capacities of the framework, we apply it to a simple groundwater model and explore the propagation of observation uncertainty to parameter and model predictions. The results show that TLG can accurately estimate model parameters and significantly reduce uncertainty in parameters and predictions, both when assimilated individually and jointly with hydraulic head data, provided that the signal-to-noise (SNR) is sufficiently high. In this condition, while joint assimilation results in greater uncertainty reduction in our example case, TLG appears to have the most substantial contribution. <em>GravCHAW</em> will enable the reduction of uncertainty in groundwater models by integrating TLG data, which will be particularly impactful in data-poor situations.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106118"},"PeriodicalIF":4.4,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
StripesCounter: A new image software for increment measurement in paleoclimate archives
Pub Date: 2026-01-09 | DOI: 10.1016/j.cageo.2026.106104 | Computers & Geosciences 209, Article 106104
Clara Boutreux, Patrick Brockmann, Mary Elliot, Matthieu Carré, Marc Gosselin
Most natural paleoclimate archives are accretionary materials presenting periodic structures that bear environmental and/or chronological information. Here we present StripesCounter, an open-access Python software package designed for automated banding detection and measurement. As a case study, 16-year-long profiles of daily growth increments were measured on a modern shell of the giant clam Tridacna gigas. High-resolution images of shell thin sections were obtained using confocal laser scanning microscopy and processed with StripesCounter. We demonstrate that StripesCounter provides highly reproducible and accurate results. The long time series of daily increments indicate that Tridacna gigas growth is strongly modulated by seasonal oceanographic variations, reflecting changes in sea surface temperature, precipitation, and salinity. Notably, the growth profiles reveal semi-annual variations related to semi-annual variations in environmental factors, potentially linked to ENSO events. This automated growth increment analysis can be extended to other archives with cyclic structures, including tree rings, corals, and other biogenic or abiotic laminated materials. StripesCounter offers a powerful and accessible tool for generating long, high-resolution, temporally explicit datasets, opening new perspectives for investigating rapid environmental changes across diverse ecosystems and geological timescales.
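A minimal illustration of automated increment counting on a 1-D luminance profile is sketched below using SciPy peak detection; the smoothing width and peak constraints are illustrative assumptions and do not represent StripesCounter's algorithm.

```python
# Minimal sketch of automated increment counting on a 1-D luminance profile
# extracted along a growth axis, using simple peak detection. The smoothing width
# and peak constraints are illustrative; the profile is assumed scaled to [0, 1].
import numpy as np
from scipy.signal import find_peaks
from scipy.ndimage import uniform_filter1d

def count_increments(profile, pixel_size_um, smooth=5, min_distance_px=3):
    """Count bands and return their widths (micrometres) along the profile."""
    smoothed = uniform_filter1d(np.asarray(profile, float), size=smooth)
    peaks, _ = find_peaks(smoothed, distance=min_distance_px, prominence=0.05)
    widths_um = np.diff(peaks) * pixel_size_um          # spacing between successive bands
    return len(peaks), widths_um
```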
{"title":"StripesCounter: A new image software for increment measurement in paleoclimate archives","authors":"Clara Boutreux , Patrick Brockmann , Mary Elliot , Matthieu Carré , Marc Gosselin","doi":"10.1016/j.cageo.2026.106104","DOIUrl":"10.1016/j.cageo.2026.106104","url":null,"abstract":"<div><div>Most natural paleoclimate archives are accretionary material presenting periodic structures that bear environmental and/or chronological information. Here we present StripesCounter, an open access Python software designed for automated banding detection and measurement. As a study case, 16-year long profiles of daily growth increment measurements were conducted on a modern shell of the giant clam <em>Tridacna gigas.</em> High resolution images of shell thin sections were obtained using a confocal laser scanning microscopy and processed using StripesCounter. We demonstrate that StripesCounter provides highly reproducible and accurate results. The long time series of daily increments indicate that <em>Tridacna gigas</em> growth is strongly modulated by seasonal oceanographic variations, reflecting changes in sea surface temperature, precipitation, and salinity. Notably, growth profiles reveal semi-annual variations related to semi-annual variations in environmental factors, potentially linked to ENSO events. This automated growth increment analysis can be extended to other archives with cyclic structures, including tree rings, corals, and other biogenic or abiotic laminated materials. <em>StripesCounter</em> offers a powerful and accessible tool for generating long high-resolution, temporally explicit datasets, opening new perspectives for investigating rapid environmental changes across diverse ecosystems and geological timescales.</div></div>","PeriodicalId":55221,"journal":{"name":"Computers & Geosciences","volume":"209 ","pages":"Article 106104"},"PeriodicalIF":4.4,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145980492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}