Implementation of Asynchronous Distributed Gauss-Newton Optimization Algorithms for Uncertainty Quantification by Conditioning to Production Data

SPE Journal · IF 3.2 · JCR Q1 (Engineering, Petroleum) · CAS Tier 3 (Engineering & Technology) · Pub Date: 2023-11-01 · DOI: 10.2118/210118-pa
Guohua Gao, Horacio Florez, Sean Jost, Shakir Shaikh, Kefei Wang, Jeroen Vink, Carl Blom, Terence J. Wells, Fredrik Saaf
Abstract

Previous implementations of the distributed Gauss-Newton (DGN) optimization algorithm ran multiple optimization threads in parallel in a synchronous running mode (S-DGN). As a result, the optimizer waits for all simulations submitted in each iteration to complete, which may significantly degrade performance because a few simulations may run much longer than others, especially for time-consuming real-field cases. To overcome this limitation and thus improve the DGN optimizer's execution, we propose two asynchronous DGN (A-DGN) optimization algorithms in this paper: (1) a local-search algorithm (A-DGN-LS) to locate multiple maximum a posteriori (MAP) estimates and (2) a global-search algorithm integrated with the randomized maximum likelihood (RML) method (A-DGN+RML) to generate hundreds of RML samples in parallel for uncertainty quantification.

We propose using batches together with a checking time interval to control the optimization process. The A-DGN optimizers check the status of all running simulations at every checking interval, and the iteration index of each optimization thread is updated dynamically according to its simulation status; thus, different optimization threads may have different iteration indices in the same batch. A new simulation case is proposed immediately once an optimization thread's simulation completes, without waiting for the other simulations to finish. To implement this asynchronous running mode, we modified the training-data-set updating algorithm to use each thread's dynamically updated iteration index. We apply a modified QR decomposition method to estimate the sensitivity matrix at the best solution of each optimization thread by linear interpolation of all or a subset of the training data, which avoids solving a linear system with a singular matrix when early batches contain too few training data points.
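The per-thread iteration bookkeeping described above can be illustrated with a toy event-driven model: each thread resubmits its next simulation the moment its current one finishes, so iteration indices drift apart across threads. This is a minimal sketch, not the paper's implementation; the `sim_time` callback and the makespan bookkeeping are illustrative assumptions.

```python
import heapq

def asynchronous_dgn_loop(n_threads, max_iters, sim_time):
    """Toy event-driven model of the A-DGN control loop.

    Each optimization thread proposes its next simulation case immediately
    when its current simulation finishes, so thread iteration indices are
    updated independently (hypothetical sim_time(thread, iteration) gives
    each run's wall-clock cost).
    """
    # iteration index per thread, updated independently
    iters = [0] * n_threads
    # event queue of (finish_time, thread) for all running simulations
    events = [(sim_time(t, 0), t) for t in range(n_threads)]
    heapq.heapify(events)
    makespan = 0.0
    while events:
        now, t = heapq.heappop(events)   # earliest-finishing simulation
        makespan = now
        iters[t] += 1                    # only this thread advances
        if iters[t] < max_iters:         # propose a new case immediately
            heapq.heappush(events, (now + sim_time(t, iters[t]), t))
    return iters, makespan
```

With alternating 1.0 s / 3.0 s runtimes on two threads, the asynchronous makespan is each thread's own total runtime (8.0 s for 4 iterations), whereas a synchronous mode paying the per-iteration maximum would take 12.0 s — the source of the reported speedup.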
A new simulation case (or search point) is generated by solving the Gauss-Newton (GN) trust-region subproblem (GNTRS) with the estimated sensitivity matrix, using a more efficient and robust GNTRS solver that we developed based on eigenvalue decomposition (EVD). The proposed A-DGN optimization methods are tested and validated on a 2D analytical toy problem and a synthetic history-matching problem and then applied to a real-field deepwater reservoir model. Numerical tests confirm that the A-DGN methods converge to solutions with matching quality comparable to that obtained by the S-DGN optimizers while reducing the time the optimizer needs to converge by a factor of 1.3 to 2 relative to the S-DGN optimizer, depending on the problem. The new A-DGN optimization algorithms improve efficiency and robustness in solving history-matching or inversion problems, especially for uncertainty quantification of subsurface model parameters and production forecasts of real-field reservoirs conditioned to production data.
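A minimal sketch of one search-point generation step, under stated assumptions: the sensitivity matrix is estimated here by a plain rank-revealing least-squares fit over training points (an illustrative stand-in for the paper's modified QR method), and the trust-region subproblem min ‖r + Js‖² s.t. ‖s‖ ≤ Δ is solved via EVD of JᵀJ with bisection on the Lagrange multiplier. The hard case (gradient orthogonal to the smallest eigenvector) is ignored, and both function names are hypothetical.

```python
import numpy as np

def estimate_sensitivity(X, Y, x_best, y_best, rcond=1e-8):
    """Estimate the sensitivity matrix S = dy/dx at x_best by linear
    interpolation of training points (X, Y). The truncated-SVD solve in
    lstsq keeps underdetermined early batches from producing a singular
    system (illustrative stand-in for the paper's modified QR method)."""
    dX = X - x_best                      # (m, n) parameter perturbations
    dY = Y - y_best                      # (m, p) response changes
    S, *_ = np.linalg.lstsq(dX, dY, rcond=rcond)
    return S.T                           # (p, n): responses x parameters

def solve_gntrs_evd(J, r, delta, tol=1e-10):
    """Solve min ||r + J s||^2 subject to ||s|| <= delta via eigenvalue
    decomposition of J^T J (sketch; the hard case is not handled)."""
    g = J.T @ r                          # gradient of the GN objective
    lam, V = np.linalg.eigh(J.T @ J)     # J^T J = V diag(lam) V^T
    gh = V.T @ g                         # gradient in the eigenbasis

    def step_norm(mu):                   # ||s(mu)|| for shift mu >= 0
        return np.linalg.norm(gh / (lam + mu))

    if lam[0] > 0 and step_norm(0.0) <= delta:
        mu = 0.0                         # unconstrained GN step fits
    else:
        lo, hi = max(0.0, -lam[0]) + 1e-12, 1.0
        while step_norm(hi) > delta:     # bracket the root of ||s||=delta
            hi *= 2.0
        while hi - lo > tol * hi:        # bisection on mu
            mid = 0.5 * (lo + hi)
            if step_norm(mid) > delta:
                lo = mid
            else:
                hi = mid
        mu = hi
    return -V @ (gh / (lam + mu))        # s(mu) back in parameter space
```

When the unconstrained GN step lies inside the trust region it is returned directly; otherwise the bisection finds the shift μ ≥ 0 that pulls the step back onto the trust-region boundary.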
Source journal: SPE Journal (Engineering: Petroleum)
CiteScore: 7.20 · Self-citation rate: 11.10% · Articles per year: 229 · Review time: 4.5 months
About the journal: Covers theories and emerging concepts spanning all aspects of engineering for oil and gas exploration and production, including reservoir characterization, multiphase flow, drilling dynamics, well architecture, gas well deliverability, numerical simulation, enhanced oil recovery, CO2 sequestration, and benchmarking and performance indicators.