An Optimum Well Control Using Reinforcement Learning and Policy Transfer; Application to Production Optimization and Slugging Minimization

J. Poort, J. van der Waa, T. Mannucci, P. Shoeibi Omrani
{"title":"基于强化学习和策略迁移的最优井控应用于生产优化和段塞最小化","authors":"J. Poort, J. van der Waa, T. Mannucci, P. Shoeibi Omrani","doi":"10.2118/210277-ms","DOIUrl":null,"url":null,"abstract":"\n Production optimization of oil, gas and geothermal wells suffering from unstable multiphase flow phenomena such as slugging is a challenging task due to their complexity and unpredictable dynamics. In this work, reinforcement learning which is a novel machine learning based control method was applied to find optimum well control strategies to maximize cumulative production while minimizing the negative impact of slugging on the system integrity, allowing for economical, safe, and reliable operation of wells and flowlines. Actor-critic reinforcement learning agents were trained to find the optimal settings for production valve opening and gas lift pressure in order to minimize slugging and maximize oil production. These agents were trained on a data-driven proxy models of two oil wells with different responses to the control actions. Use of such proxy models allowed for faster modelling of the environment while still accurately representing the system’s physical relations. In addition, to further increase the speed of optimization convergence, a policy transfer schem was developed in which a pre-trained agent on a different well was applied and finetuned on a new well. The reinforcement learning agents successfully managed to learn control strategies that improved oil production by up to 17% and reduced slugging effects by 6% when compared to baseline control settings. In addition, using policy transfer, agents converged up to 63% faster than when trained from a random initialization.","PeriodicalId":113697,"journal":{"name":"Day 2 Tue, October 04, 2022","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"An Optimum Well Control Using Reinforcement Learning and Policy Transfer; Application to Production Optimization and Slugging Minimization\",\"authors\":\"J. Poort, J. van der Waa, T. Mannucci, P. Shoeibi Omrani\",\"doi\":\"10.2118/210277-ms\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Production optimization of oil, gas and geothermal wells suffering from unstable multiphase flow phenomena such as slugging is a challenging task due to their complexity and unpredictable dynamics. In this work, reinforcement learning which is a novel machine learning based control method was applied to find optimum well control strategies to maximize cumulative production while minimizing the negative impact of slugging on the system integrity, allowing for economical, safe, and reliable operation of wells and flowlines. Actor-critic reinforcement learning agents were trained to find the optimal settings for production valve opening and gas lift pressure in order to minimize slugging and maximize oil production. These agents were trained on a data-driven proxy models of two oil wells with different responses to the control actions. Use of such proxy models allowed for faster modelling of the environment while still accurately representing the system’s physical relations. In addition, to further increase the speed of optimization convergence, a policy transfer schem was developed in which a pre-trained agent on a different well was applied and finetuned on a new well. 
The reinforcement learning agents successfully managed to learn control strategies that improved oil production by up to 17% and reduced slugging effects by 6% when compared to baseline control settings. In addition, using policy transfer, agents converged up to 63% faster than when trained from a random initialization.\",\"PeriodicalId\":113697,\"journal\":{\"name\":\"Day 2 Tue, October 04, 2022\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-09-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Day 2 Tue, October 04, 2022\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2118/210277-ms\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Day 2 Tue, October 04, 2022","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2118/210277-ms","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Production optimization of oil, gas and geothermal wells suffering from unstable multiphase flow phenomena such as slugging is a challenging task due to their complexity and unpredictable dynamics. In this work, reinforcement learning, a novel machine-learning-based control method, was applied to find optimum well control strategies that maximize cumulative production while minimizing the negative impact of slugging on system integrity, allowing for economical, safe, and reliable operation of wells and flowlines. Actor-critic reinforcement learning agents were trained to find the optimal settings for production valve opening and gas lift pressure in order to minimize slugging and maximize oil production. These agents were trained on data-driven proxy models of two oil wells with different responses to the control actions. Use of such proxy models allowed for faster modelling of the environment while still accurately representing the system's physical relations. In addition, to further increase the speed of optimization convergence, a policy transfer scheme was developed in which an agent pre-trained on a different well was applied to and fine-tuned on a new well. The reinforcement learning agents successfully learned control strategies that improved oil production by up to 17% and reduced slugging effects by 6% compared to baseline control settings. In addition, using policy transfer, agents converged up to 63% faster than when trained from a random initialization.
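The abstract describes actor-critic agents trained on data-driven well proxy models, plus a policy-transfer step in which an agent pre-trained on one well is fine-tuned on another. The sketch below is only an illustrative, minimal version of that workflow, not the authors' implementation: WellProxyEnv, its toy dynamics and reward weighting of production versus slugging, the network sizes, and all hyperparameters are invented stand-ins for the paper's proxy models and training setup.

```python
# Minimal actor-critic + policy-transfer sketch (illustrative only).
# WellProxyEnv is a hypothetical stand-in for the paper's data-driven proxy models.
import torch
import torch.nn as nn

class WellProxyEnv:
    """Toy stand-in for a data-driven well proxy model (not the paper's model)."""
    def __init__(self, gain=1.0):
        self.gain = gain            # different wells respond differently to controls
        self.state = torch.zeros(2) # [oil-rate proxy, slugging-severity proxy]

    def reset(self):
        self.state = torch.rand(2)
        return self.state.clone()

    def step(self, action):
        valve, gaslift = action     # valve opening and gas lift setting, both in [0, 1]
        oil = self.gain * valve * (0.5 + 0.5 * gaslift)
        slug = torch.clamp(valve - 0.6 * gaslift, min=0.0)
        self.state = torch.stack([oil, slug])
        reward = oil - 2.0 * slug   # reward production, penalize slugging (invented weights)
        return self.state.clone(), reward

class ActorCritic(nn.Module):
    def __init__(self, obs_dim=2, act_dim=2):
        super().__init__()
        self.actor = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                   nn.Linear(32, act_dim), nn.Sigmoid())
        self.critic = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                    nn.Linear(32, 1))
        self.log_std = nn.Parameter(torch.full((act_dim,), -1.0))

    def forward(self, obs):
        mean = self.actor(obs)                       # action mean in [0, 1]
        dist = torch.distributions.Normal(mean, self.log_std.exp())
        return dist, self.critic(obs).squeeze(-1)

def train(env, agent, episodes=200, steps=50, gamma=0.99, lr=3e-4):
    opt = torch.optim.Adam(agent.parameters(), lr=lr)
    for _ in range(episodes):
        obs = env.reset()
        for _ in range(steps):
            dist, value = agent(obs)
            action = dist.sample().clamp(0.0, 1.0)
            next_obs, reward = env.step(action)
            with torch.no_grad():
                _, next_value = agent(next_obs)
                target = reward + gamma * next_value  # one-step TD target
            advantage = target - value
            actor_loss = -(dist.log_prob(action).sum() * advantage.detach())
            critic_loss = advantage.pow(2)
            opt.zero_grad()
            (actor_loss + critic_loss).backward()
            opt.step()
            obs = next_obs
    return agent

# Train on "well A", then transfer the policy and fine-tune on "well B"
# with fewer episodes and a smaller learning rate.
agent_a = train(WellProxyEnv(gain=1.0), ActorCritic())
agent_b = ActorCritic()
agent_b.load_state_dict(agent_a.state_dict())          # warm start from well A
train(WellProxyEnv(gain=0.7), agent_b, episodes=50, lr=1e-4)
```

A warm start via load_state_dict followed by short fine-tuning is one simple way to realize the policy-transfer idea; the paper's reported result is that transferred agents converged up to 63% faster than agents trained from random initialization.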