Abstract. Estimating a reliable subsurface resistivity structure with conventional techniques is challenging because of the nonlinear nature of the inverse problem. Although traditional methods have proven effective, judging an inversion by its optimal error alone can be ambiguous. In this work, the impact of constraints available from a borehole is examined to further assess and enhance the algorithm's effectiveness. The new vPSOGWO strategy hybridizes particle swarm optimization (PSO) with the Grey Wolf Optimizer (GWO) and searches a model space without any prior information. To assess the efficiency and novelty of the algorithm, it was validated on two kinds of synthetic resistivity data contaminated with various levels of noise and subsequently applied to three field datasets from different geological terrains. The results suggest that the subsurface resistivity model carries considerable uncertainty, so it is preferable to examine the histograms and posterior probability density functions (PDFs) of the solutions to characterize the global solution. A PDF with a 68.27 % confidence interval (CI) selects the region of highest probability. The inverted models are therefore used to estimate the mean global solution, which has the smallest uncertainty and represents the best solution. The vPSOGWO-inverted results prove more accurate than classic PSO, GWO, and state-of-the-art variants of these approaches. This novel method thus plays a valuable role in vertical electrical sounding (VES) data inversion.
Stability and uncertainty assessment of geoelectrical resistivity model parameters: a new hybrid metaheuristic algorithm and posterior probability density function approach
Kuldeep Sarkar, Jit V. Tiwari, Upendra K. Singh
Nonlinear Processes in Geophysics. Pub Date: 2024-01-10, DOI: 10.5194/npg-31-7-2024
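The abstract's 68.27 % confidence interval on the ensemble of inverted models can be sketched in a few lines. This is an illustrative reading, not the paper's code: given repeated inversion runs, summarize each layer parameter by its mean and a Gaussian-approximate CI (the resistivity values below are made up).

```python
# Sketch: summarizing an ensemble of inverted resistivity models by the
# mean model and a 68.27 % confidence interval per layer parameter,
# assuming an approximately Gaussian posterior. Numbers are illustrative.
from statistics import NormalDist, mean, stdev

def summarize_parameter(samples, ci=0.6827):
    """Return (mean, lower, upper) for one model parameter."""
    m = mean(samples)
    s = stdev(samples)
    z = NormalDist().inv_cdf(0.5 + ci / 2.0)  # ~1.0 for a 68.27 % CI
    return m, m - z * s, m + z * s

# Hypothetical resistivities (ohm-m) of one layer across inversion runs
layer_rho = [98.0, 102.5, 101.0, 99.5, 100.0, 103.0, 97.0, 99.0]
m, lo, hi = summarize_parameter(layer_rho)
print(f"mean = {m:.1f} ohm-m, 68.27 % CI = [{lo:.1f}, {hi:.1f}]")
```

For a 68.27 % CI the probit factor z is essentially 1, i.e. the interval is mean ± one standard deviation, which is why this CI is the conventional choice.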
Laila Zafar Kahlon, Hassan Amir Shah, Tamaz David Kaladze, Qura Tul Ain, Syed Assad Bukhari
Abstract not available
Brief Communication: A modified Korteweg–de Vries equation for Rossby–Khantadze waves in a sheared zonal flow of the ionospheric E layer
Nonlinear Processes in Geophysics. Pub Date: 2024-01-09, DOI: 10.5194/npg-31-1-2024
Qiu Ping Lu, Cai Ping Wu, Hui Chen, Xiao Chang Chen, San Qiu Liu
Abstract. The dynamics of ion holes (IHs) in plasmas in which electrons follow the regularized Kappa distribution (RKD) and ions follow the Maxwellian distribution (MD) are investigated using the Bernstein–Greene–Kruskal (BGK) method. The results show that the depth of the IHs and the combinations of width and amplitude that support a physically plausible IH equilibrium depend on the spectral index κe and the cut-off parameter α of the distribution function. With increasing κe and α, the IHs become deeper and admit a larger permissible region of width and amplitude; with decreasing κe and α, the IHs become shallower and admit a smaller range of width and amplitude. The present work may contribute to the understanding of nonlinear structures in plasma systems containing non-thermal particles.
The dynamic of ion Bernstein-Greene-Kruskal holes in plasmas with regularized κ-distributed electrons
Nonlinear Processes in Geophysics. Pub Date: 2024-01-08, DOI: 10.5194/npg-2023-25
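For context, the regularized Kappa distribution referred to in this abstract multiplies the standard Kappa power law by a Gaussian cutoff. One commonly quoted isotropic form is sketched below; the normalization N and the exponent convention may differ from the one used in the paper:

```latex
f_{\mathrm{RKD}}(v) \;=\; N(\kappa_e,\alpha)\,
\left(1 + \frac{v^{2}}{\kappa_e\,\theta^{2}}\right)^{-(\kappa_e+1)}
\exp\!\left(-\alpha^{2}\,\frac{v^{2}}{\theta^{2}}\right)
```

Here θ is a thermal speed. The exponential factor controlled by the cut-off parameter α suppresses the power-law tail so that all velocity moments remain finite; α → 0 recovers the standard Kappa distribution, and κe → ∞ the Maxwellian.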
Florian Beiser, Håvard Heitlo Holm, Kjetil Olsen Lye, Jo Eidsvik
Abstract. Multi-level Monte Carlo methods have become established as a tool in uncertainty quantification for decreasing computational costs while maintaining the same statistical accuracy as single-level Monte Carlo. Lately, there have also been theoretical efforts to use similar ideas to facilitate multi-level data assimilation. By applying a multi-level ensemble Kalman filter to assimilate sparse observations of ocean currents into a simplified ocean model based on the shallow-water equations, we study the practical challenges of applying these methods to more complex problems. We present numerical results from a realistic test case in which small-scale perturbations lead to chaotic behaviour, and in this context we conduct state estimation and drift-trajectory forecasting using multi-level ensembles. This represents a new step on the path towards making multi-level data assimilation feasible for real-world oceanographic applications.
Multi-level data assimilation for simplified ocean models
Nonlinear Processes in Geophysics. Pub Date: 2024-01-04, DOI: 10.5194/npg-2023-27
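The multi-level idea this paper builds on can be sketched with the classic multi-level Monte Carlo telescoping estimator: spend many cheap samples on coarse levels and few expensive ones on fine levels, coupling levels through shared samples. The toy "simulator" below is hypothetical; the paper's multi-level ensemble Kalman filter is considerably more involved.

```python
# Multi-level Monte Carlo sketch: estimate E[P_L] via a telescoping sum
# sum_l E[P_l - P_{l-1}], with the same random input coupling the two
# levels of each difference so the differences have small variance.
import random

def model(x, level):
    """Toy simulator: finer levels have smaller discretization error."""
    return x * x + 2.0 ** (-level)   # bias shrinks as level grows

def mlmc_estimate(n_per_level, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for level, n in enumerate(n_per_level):
        acc = 0.0
        for _ in range(n):
            x = rng.gauss(0.0, 1.0)             # shared sample couples levels
            fine = model(x, level)
            coarse = model(x, level - 1) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

# E[x^2] = 1 for x ~ N(0, 1), so E[P_2] = 1 + 2^-2 = 1.25
est = mlmc_estimate([8000, 1000, 200])
```

Most samples go to level 0, where the toy model is cheapest, yet the estimate targets the finest level's expectation.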
Lilian Vanderveken, Marina Martínez Montero, Michel Crucifix
Abstract. The Rietkerk vegetation model is a system of partial differential equations that has been used to understand the formation and dynamics of spatial patterns in vegetation ecosystems, including desertification and biodiversity loss. Here, we provide an in-depth bifurcation analysis of the vegetation patterns produced by Rietkerk's model, based on a linear stability analysis of the homogeneous equilibrium of the system. Specifically, using a continuation method based on the Newton–Raphson algorithm, we obtain all the main heterogeneous equilibria for a given domain size. We confirm that inhomogeneous vegetated states can exist and be stable, even for a value of rainfall for which no vegetation exists in the non-spatialized system. In addition, we demonstrate the existence of a new type of equilibrium, which we call a “mixed state”, which is always unstable and takes the form of a mix of two equilibria from the main branches. Although these equilibria are unstable, they influence the dynamics of transitions between distinct stable states by slowing down the evolution of the system when it passes close to them. Our approach proves to be a helpful way to assess the existence of tipping points in spatially extended systems and to disentangle the fate of the system in the Busse balloon. Overall, our findings represent a significant step forward in understanding the behaviour of the Rietkerk model and the broader dynamics of vegetation patterns.
Existence and influence of mixed states in a model of vegetation patterns
Nonlinear Processes in Geophysics. Pub Date: 2023-12-11, DOI: 10.5194/npg-30-585-2023
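The continuation-plus-Newton–Raphson procedure the abstract describes can be sketched on a scalar stand-in problem: follow an equilibrium branch of f(u, p) = 0 as the parameter p (rainfall, in the Rietkerk model) varies, re-converging from the previous solution at each step. The scalar f below is purely illustrative, not the Rietkerk system.

```python
# Natural-parameter continuation sketch: each converged equilibrium
# seeds the Newton-Raphson solve at the next parameter value.
def newton(f, dfdu, u0, p, tol=1e-12, itmax=50):
    u = u0
    for _ in range(itmax):
        step = f(u, p) / dfdu(u, p)
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton did not converge")

f = lambda u, p: u * u - p          # stand-in equilibrium condition u^2 = p
dfdu = lambda u, p: 2.0 * u

branch = []
u = 1.0                              # start on the known branch at p = 1
for i in range(11):                  # continue from p = 1.0 to p = 2.0
    p = 1.0 + 0.1 * i
    u = newton(f, dfdu, u, p)        # previous u seeds the next solve
    branch.append((p, u))
```

Seeding each solve with the previous solution is what lets continuation track a specific branch rather than jumping to another equilibrium of the same equation.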
Abstract. The effusive–explosive energy emission process in a volcano is a dynamic and complex physical phenomenon. Quantifying this complexity in terms of the physical and mathematical mechanisms that govern these emissions should be a prerequisite for deciding whether a forecasting strategy can be applied with a sufficient degree of certainty. In this research, the complexity of this process is determined by means of the reconstruction theorem and statistical procedures applied to the effusive–explosive volcanic energy emissions corresponding to the activity of Volcán de Colima (western segment of the Trans-Mexican Volcanic Belt) during the years 2013–2015. The analysis focuses on measuring the degree of persistence or randomness of the series, the degree of predictability of the energy emissions, and the degree of complexity and “memory loss” of the physical mechanism throughout an episode of volcanic emissions.
The results indicate that the analysed time series depict a high degree of persistence and low memory loss, making the effusive–explosive volcanic emission structure a candidate for successfully applying a forecasting strategy.
Uncertainties, complexities and possible forecasting of Volcán de Colima energy emissions (Mexico, years 2013–2015) based on a fractal reconstruction theorem
Marisol Monterrubio-Velasco, Xavier Lana, Raúl Arámbula-Mendoza
Nonlinear Processes in Geophysics. Pub Date: 2023-12-08, DOI: 10.5194/npg-30-571-2023
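The paper quantifies persistence with fractal and reconstruction-theorem tools; as a much simpler illustrative proxy, the lag-1 autocorrelation of an emission series already separates persistent from random behaviour. The series below are synthetic, not Colima data.

```python
# Persistence proxy sketch: lag-1 autocorrelation of a time series.
# A persistent (AR(1), phi = 0.9) series scores near 0.9; white noise
# scores near 0.
import random

def lag1_autocorr(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

rng = random.Random(42)
persistent = [0.0]
for _ in range(999):                       # AR(1) with phi = 0.9
    persistent.append(0.9 * persistent[-1] + rng.gauss(0, 1))
noise = [rng.gauss(0, 1) for _ in range(1000)]

r_persistent = lag1_autocorr(persistent)   # high: persistent dynamics
r_noise = lag1_autocorr(noise)             # near zero: no memory
```

A high score is necessary but not sufficient for forecastability, which is why the paper goes further and measures predictability and memory loss directly.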
Pub Date: 2023-12-05, DOI: 10.5194/egusphere-2023-2755
Marc Bocquet, Pierre J. Vanderbecken, Alban Farchi, Joffrey Dumont Le Brazidec, Yelva Roustan
Abstract. Because optimal transport acts as displacement interpolation in physical space rather than as interpolation in value space, it can potentially avoid double-penalty errors. As such, it provides a very attractive metric for comparing non-negative physical fields – the Wasserstein distance – which could further be used in data assimilation for the geosciences. The algorithmic and numerical implementations of such a distance are, however, not straightforward. Moreover, its theoretical formulation within typical data assimilation problems faces conceptual challenges, resulting in scarce contributions on the topic in the literature. We formulate the problem in a way that offers a unified view of both classical data assimilation and optimal transport. The resulting OTDA framework accounts for both classical sources of prior error, background and observation, together with a Wasserstein barycentre between the states that represent this background and these observations. We show that the hybrid OTDA analysis can be decomposed into a simpler OTDA problem involving a single Wasserstein distance, followed by a Wasserstein barycentre problem that ignores the prior errors and can be seen as a McCann interpolant. We also propose a less enlightening but straightforward solution to the full OTDA problem, which includes the derivation of its analysis error covariance matrix. Thanks to these theoretical developments, we are able to extend the classical 3D-Var/BLUE paradigm at the core of most classical data assimilation schemes. The resulting formalism is very flexible and can account for sparse, noisy observations and non-Gaussian error statistics. It is illustrated by simple one- and two-dimensional examples that show the richness of the new types of analysis offered by this unification.
Bridging classical data assimilation and optimal transport
Nonlinear Processes in Geophysics
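The Wasserstein distance the authors build on has a closed form in one dimension: integrate the absolute difference of the two cumulative distribution functions. The sketch below works on a uniform grid and normalizes the fields to unit mass first; it also shows why optimal transport avoids the double penalty, since a displaced feature costs its displacement rather than two value mismatches.

```python
# 1-D Wasserstein-1 distance sketch: W1(p, q) = sum |CDF_p - CDF_q| * dx
# for discrete fields p, q on a common uniform grid.
def wasserstein1_1d(p, q, dx=1.0):
    sp, sq = sum(p), sum(q)
    cp = cq = 0.0
    w = 0.0
    for a, b in zip(p, q):
        cp += a / sp               # running CDF of the normalized fields
        cq += b / sq
        w += abs(cp - cq) * dx
    return w

# Two unit point masses three cells apart: transport cost = 3 cells,
# whereas a pointwise (value-space) metric would penalize both the miss
# and the false alarm.
field_a = [0, 1, 0, 0, 0]
field_b = [0, 0, 0, 0, 1]
print(wasserstein1_1d(field_a, field_b))  # -> 3.0
```

In higher dimensions no such closed form exists, which is part of the implementation difficulty the abstract alludes to.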
Pub Date: 2023-12-01, DOI: 10.5194/egusphere-2023-2649
Pierre Le Bras, Florian Sévellec, Pierre Tandeo, Juan Ruiz, Pierre Ailliot
Abstract. In geosciences, multi-model ensembles are helpful for exploring the robustness of a range of results. To obtain a synthetic and improved representation of the studied dynamical system, the models are usually weighted. The simplest method, model democracy, gives equal weight to all models, while more advanced approaches base the weights on agreement with available observations. Here, we focus on determining weights for various versions of an idealized model of the Atlantic Meridional Overturning Circulation. This is done by assessing their performance against synthetic observations (generated from one of the model versions) within a data assimilation framework using an ensemble Kalman filter (EnKF). In contrast to traditional data assimilation, we implement data-driven forecasts using the analog method based on catalogs of short-term trajectories. This approach allows us to efficiently emulate the model's dynamics while keeping computational costs low. For each model version, we compute a local performance metric, known as the contextual model evidence, to compare observations and model forecasts. This metric, based on the innovation likelihood, is sensitive to differences in model dynamics and accounts for forecast and observation uncertainties. Finally, the weights are calculated using both model performance and model codependency, and then evaluated on climatologies of long-term simulations. Results show good performance in identifying the numerical simulations that best replicate observed short-term variations. Additionally, the method outperforms benchmark approaches such as model democracy or climatology-based strategies when reconstructing missing distributions. These findings encourage the application of the proposed methodology to more complex datasets in the future, such as climate simulations.
Selecting and weighting dynamical models using data-driven approaches
Nonlinear Processes in Geophysics
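The innovation-likelihood weighting the abstract describes can be sketched in its simplest scalar Gaussian form: each model's weight is proportional to the Gaussian likelihood of its innovation (observation minus forecast), with the variance combining forecast spread and observation error. This is an interpretation of the general idea, not the paper's EnKF-based contextual model evidence, and the numbers are illustrative, not the AMOC setup.

```python
# Likelihood-based model weighting sketch: w_k proportional to
# N(y_obs - y_forecast_k; 0, s2), normalized to sum to one.
import math

def innovation_likelihood(y_obs, y_forecast, s2):
    d = y_obs - y_forecast
    return math.exp(-0.5 * d * d / s2) / math.sqrt(2 * math.pi * s2)

def model_weights(y_obs, forecasts, s2):
    ev = [innovation_likelihood(y_obs, f, s2) for f in forecasts]
    total = sum(ev)
    return [e / total for e in ev]

# Three hypothetical model versions forecasting one observed quantity;
# s2 = forecast variance + observation-error variance
weights = model_weights(y_obs=1.0, forecasts=[0.9, 1.5, 3.0], s2=0.25)
```

The model closest to the observation dominates, while a badly wrong model receives a near-zero weight rather than the 1/3 that model democracy would grant it.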
Abstract. Near-surface winds over complex terrain generally feature large variability at the local scale. Forecasting these winds requires high-resolution numerical weather prediction (NWP) models, which drastically increase the duration of simulations and prevent them from running on a routine basis. Nevertheless, downscaling methods can help in forecasting such wind flows at limited numerical cost. In this study, we present a statistical downscaling of WRF (Weather Research and Forecasting) wind forecasts over southeastern France (including the southwestern part of the Alps) from their original 9 km resolution onto a 1 km resolution grid (1 km NWP model outputs are used to fit our statistical models). Downscaling is performed using convolutional neural networks (CNNs), the most powerful machine learning tool for processing images or any kind of gridded data, as demonstrated by recent studies dealing with wind forecast downscaling. Those studies mostly focused on testing new model architectures. Here, we aim to extend these works by exploring different output variables and their associated loss functions. We found that no single approach outperforms the others in terms of both direction and speed at the same time. The best overall performance is obtained by combining two CNNs: one dedicated to the direction forecast, based on normalized wind components and a customized mean squared error (MSE) loss function, and the other dedicated to the speed forecast, based on the wind components and another customized MSE loss function. Local-scale, topography-related wind features, which were poorly forecast at 9 km, are now well reproduced, both for speed (e.g., acceleration on the ridge, leeward deceleration, sheltering in valleys) and direction (deflection, valley channeling). There is a general improvement in the forecast, especially during the nighttime stable-stratification period, which is the most difficult to forecast. After downscaling, the wind speed bias is reduced from −0.55 to −0.01 m s⁻¹, the wind speed MAE from 1.02 to 0.69 m s⁻¹ (a 32 % reduction) and the wind direction MAE from 25.9 to 15.5° (a 40 % reduction) relative to the 9 km resolution forecast.
{"title":"Downscaling of surface wind forecasts using convolutional neural networks","authors":"Florian Dupuy, Pierre Durand, Thierry Hedde","doi":"10.5194/npg-30-553-2023","DOIUrl":"https://doi.org/10.5194/npg-30-553-2023","url":null,"abstract":"Abstract. Near-surface winds over complex terrain generally feature a large variability at the local scale. Forecasting these winds requires high-resolution numerical weather prediction (NWP) models, which drastically increase the duration of simulations and hinder them in running on a routine basis. Nevertheless, downscaling methods can help in forecasting such wind flows at limited numerical cost. In this study, we present a statistical downscaling of WRF (Weather Research and Forecasting) wind forecasts over southeastern France (including the southwestern part of the Alps) from its original 9 km resolution onto a 1 km resolution grid (1 km NWP model outputs are used to fit our statistical models). Downscaling is performed using convolutional neural networks (CNNs), which are the most powerful machine learning tool for processing images or any kind of gridded data, as demonstrated by recent studies dealing with wind forecast downscaling. The previous studies mostly focused on testing new model architectures. In this study, we aimed to extend these works by exploring different output variables and their associated loss function. We found that there is no one approach that outperforms the others in terms of both the direction and the speed at the same time. Finally, the best overall performance is obtained by combining two CNNs, one dedicated to the direction forecast based on the calculation of the normalized wind components using a customized mean squared error (MSE) loss function and the other dedicated to the speed forecast based on the calculation of the wind components and using another customized MSE loss function. 
Local-scale, topography-related wind features, which were poorly forecast at 9 km, are now well reproduced, both for speed (e.g., acceleration on the ridge, leeward deceleration, sheltering in valleys) and direction (deflection, valley channeling). There is a general improvement in the forecast, especially during the nighttime stable stratification period, which is the most difficult period to forecast. The result is that, after downscaling, the wind speed bias is reduced from −0.55 to −0.01 m s−1, the wind speed MAE is reduced from 1.02 to 0.69 m s−1 (32 % reduction) and the wind direction MAE is reduced from 25.9 to 15.5° (40 % reduction) in comparison with the 9 km resolution forecast.","PeriodicalId":54714,"journal":{"name":"Nonlinear Processes in Geophysics","volume":"214 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138531955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
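The abstract above describes two customized MSE losses computed on wind components: normalizing the component vectors makes the loss sensitive to direction only, while the unnormalized version also penalizes speed errors. The sketch below is an illustrative NumPy version of that idea; the function names and the meteorological sign convention are assumptions, not the paper's code, and the actual CNNs would use a differentiable framework implementation of the same formula.

```python
import numpy as np

def wind_to_components(speed, direction_deg):
    """Convert speed and meteorological direction (degrees) to (u, v) components."""
    rad = np.deg2rad(direction_deg)
    # Meteorological convention: direction is where the wind blows FROM.
    u = -speed * np.sin(rad)
    v = -speed * np.cos(rad)
    return u, v

def component_mse(u_true, v_true, u_pred, v_pred, normalize=False):
    """MSE over wind components.

    With normalize=True the vectors are scaled to unit length first, so the
    loss responds to direction errors only (the direction-CNN variant); with
    normalize=False, speed errors contribute as well (the speed-CNN variant).
    """
    if normalize:
        eps = 1e-8  # avoid division by zero for calm winds
        norm_t = np.hypot(u_true, v_true) + eps
        norm_p = np.hypot(u_pred, v_pred) + eps
        u_true, v_true = u_true / norm_t, v_true / norm_t
        u_pred, v_pred = u_pred / norm_p, v_pred / norm_p
    return np.mean((u_true - u_pred) ** 2 + (v_true - v_pred) ** 2)
```

For example, two forecasts with the same direction but different speeds give a near-zero normalized loss while the unnormalized loss stays large, which is why combining the two networks covers both quantities.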
Pub Date: 2023-11-28
DOI: 10.5194/egusphere-2023-2699
Man-Yau Chan
Abstract. Small forecast ensemble sizes (< 100) are common in the ensemble data assimilation (EnsDA) component of geophysical forecast systems, thus limiting the error-constraining power of EnsDA. This study proposes an efficient and embarrassingly parallel method to generate additional ensemble members: the Probit-space Ensemble Size Expansion for Gaussian Copulas (PESE-GC; "peace gee see"). Such members are called "virtual members". PESE-GC utilizes the users' knowledge of the marginal distributions of forecast model variables. Virtual members can be generated from any (potentially non-Gaussian) multivariate forecast distribution that has a Gaussian copula. PESE-GC's impact on EnsDA is evaluated using the 40-variable Lorenz 1996 model, several EnsDA algorithms, several observation operators, a range of EnsDA cycling intervals and a range of forecast ensemble sizes. Significant improvements to EnsDA (p < 0.01) are observed when 1) the forecast ensemble size is small (≤ 20 members), 2) the user selects marginal distributions that improve the forecast model variable statistics, and/or 3) the rank histogram filter is used with non-parametric priors in high-forecast-spread situations. These results motivate development and testing of PESE-GC for EnsDA with high-order geophysical models.
{"title":"Improving Ensemble Data Assimilation through Probit-space Ensemble Size Expansion for Gaussian Copulas (PESE-GC)","authors":"Man-Yau Chan","doi":"10.5194/egusphere-2023-2699","DOIUrl":"https://doi.org/10.5194/egusphere-2023-2699","url":null,"abstract":"<strong>Abstract.</strong> Small forecast ensemble sizes (< 100) are common in the ensemble data assimilation (EnsDA) component of geophysical forecast systems, thus limiting the error-constraining power of EnsDA. This study proposes an efficient and embarrassingly parallel method to generate additional ensemble members: the Probit-space Ensemble Size Expansion for Gaussian Copulas (PESE-GC; \"peace gee see\"). Such members are called \"virtual members\". PESE-GC utilizes the users' knowledge of the marginal distributions of forecast model variables. Virtual members can be generated from any (potentially non-Gaussian) multivariate forecast distribution that has a Gaussian copula. PESE-GC's impact on EnsDA is evaluated using the 40-variable Lorenz 1996 model, several EnsDA algorithms, several observation operators, a range of EnsDA cycling intervals and a range of forecast ensemble sizes. Significant improvements to EnsDA (<em>p</em> < 0.01) are observed when either 1) the forecast ensemble size is small (≤20 members), 2) the user selects marginal distributions that improve the forecast model variable statistics, and/or 3) the rank histogram filter is used with non-parametric priors in high forecast spread situations. 
These results motivate development and testing of PESE-GC for EnsDA with high-order geophysical models.","PeriodicalId":54714,"journal":{"name":"Nonlinear Processes in Geophysics","volume":"72 1","pages":""},"PeriodicalIF":2.2,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138531985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Earth Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
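The PESE-GC abstract describes generating virtual members by mapping each variable through its marginal CDF into probit (standard-normal) space, sampling from the Gaussian copula fitted there, and mapping back. The sketch below is a rough illustration of that pipeline, not the author's implementation: it assumes Gaussian marginals (in which case the probit transform reduces to standardization), whereas the real method lets the user supply arbitrary marginal distributions.

```python
import numpy as np

def pese_gc_sketch(ensemble, n_virtual, rng=None):
    """Illustrative probit-space ensemble size expansion.

    ensemble: array of shape (n_members, n_vars).
    Returns n_virtual "virtual members" of shape (n_virtual, n_vars),
    drawn from the Gaussian copula fitted to the ensemble in probit space.
    Gaussian marginals are assumed here for simplicity.
    """
    rng = np.random.default_rng(rng)
    mu = ensemble.mean(axis=0)
    sd = ensemble.std(axis=0, ddof=1)
    # 1) Probit transform: marginal CDF followed by the inverse standard-normal
    #    CDF. With Gaussian marginals this is just standardization.
    probit = (ensemble - mu) / sd
    # 2) Fit the Gaussian-copula correlation in probit space and draw virtual
    #    members there; each draw is independent (embarrassingly parallel).
    corr = np.corrcoef(probit, rowvar=False)
    z = rng.multivariate_normal(np.zeros(ensemble.shape[1]), corr, size=n_virtual)
    # 3) Map back through the marginal quantile functions.
    return mu + sd * z
```

Because the copula correlation is estimated once and the draws are independent, the expansion parallelizes trivially across virtual members, which is the property the abstract highlights.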