Ensemble Data Analytics Approaches for Fast Parametrization Screening and Validation
Mohammed Amr Aly, P. Anastasi, G. Fighera, Ernesto Della Rossa
DOI: 10.2118/207215-ms
Abstract
Ensemble approaches are increasingly used for history matching, including with large-scale models. However, their iterative nature and the high computational resources they require demand careful and consistent parameterization of the initial ensemble of models, to avoid repeated and time-consuming attempts before an acceptable match is achieved. The objective of this work is to introduce ensemble-based data analytics techniques that validate the starting ensemble and identify potential parameterization problems early, with significant time savings.
These techniques are based on the same definition of the mismatch between the initial ensemble simulation results and the historical data that the ensemble algorithms use. From the mismatch, a notion of distance among ensemble realizations can be introduced, which opens the possibility of applying statistical analysis techniques such as Multi-Dimensional Scaling and Generalized Sensitivity Analysis. In this way a clear and immediate view of ensemble behavior can be obtained quickly. Combining these views with advanced correlation analysis then allows a fast assessment of the ensemble's consistency with the observed data and with the physical understanding of the reservoir.
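To make the idea concrete, the following is a minimal illustrative sketch (not the authors' code) of how a mismatch-based distance between realizations could feed a Multi-Dimensional Scaling map of the initial ensemble. The synthetic data, array names, and the normalized-misfit definition are assumptions for illustration only.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_real, n_obs = 100, 500                  # ensemble members, observed data points
sim = rng.normal(size=(n_real, n_obs))    # simulated responses, one row per realization
obs = rng.normal(size=n_obs)              # historical observations
sigma = np.full(n_obs, 0.5)               # observation error / tolerance (assumed)

# Normalized mismatch of each realization with respect to the observed data
misfit = (sim - obs) / sigma              # shape (n_real, n_obs)

# Distance between two realizations: Euclidean distance of their misfit vectors
dist = np.linalg.norm(misfit[:, None, :] - misfit[None, :, :], axis=-1)

# 2-D MDS map of the ensemble; clusters or outliers hint at parameterization issues
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)
print(coords.shape)  # (100, 2)
```

Plotting the resulting coordinates against the historical data's position (obtained by appending a zero-misfit "observation" row) gives the quick visual check of ensemble coverage described above.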
Application of the proposed methodology to real ensemble history matching studies shows that the approach is very effective in identifying whether a specific initial ensemble is adequately parameterized to start a successful data assimilation loop. Insufficient variability, due to poor capture of the reservoir performance, can be investigated at both field and well scales through data analytics computations. The information contained in ensemble mismatches of relevant quantities such as water breakthrough and gas-oil ratio is then evaluated in a systematic way. The analysis often reveals where, and for which uncertainties, there is not enough variability to explain the historical data. It also allows the role of apparently inconsistent parameters to be detected. In principle, the heavy iterative computation can be started even with an initial ensemble for which the analytics tools show potential difficulties. However, experience with large-scale models shows that the probability of obtaining a good match in these situations is very low, leading to a time-consuming revision of the entire process. Conversely, if the ensemble is validated, the iterative large-scale computations achieve a good calibration, with a consistency that enables predictive ability.
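As an illustration of how such a systematic evaluation might look, the sketch below applies a simple Generalized Sensitivity style screening: realizations are split into behavioral and non-behavioral sets by their total mismatch, and a two-sample Kolmogorov-Smirnov test flags parameters whose distributions differ between the two sets. The synthetic data, the 30% cutoff, and the significance level are assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
n_real, n_param = 100, 8
params = rng.uniform(size=(n_real, n_param))      # sampled initial-ensemble parameters
total_misfit = rng.gamma(shape=2.0, size=n_real)  # total mismatch per realization

cutoff = np.quantile(total_misfit, 0.3)           # best 30% taken as "behavioral" (assumed)
behavioral = total_misfit <= cutoff

for j in range(n_param):
    stat, pval = ks_2samp(params[behavioral, j], params[~behavioral, j])
    flag = "influential" if pval < 0.05 else "-"
    print(f"parameter {j}: KS={stat:.2f}, p={pval:.3f} {flag}")
```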
As a new and interesting feature of the proposed methodology, the ensemble data analytics techniques can give clues and suggestions, in advance, about which parameters could be the source of potential history matching problems. In this way the revision of the uncertainties can be anticipated directly on the initial ensemble, for example by modifying parameter ranges, introducing new parameters, and better tuning other ensemble factors, such as localization and observation tolerances, that control the ultimate match quality.
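A hedged sketch of the kind of screening that could surface such parameter clues is given below: a rank correlation between each uncertain parameter and the mismatch of a key quantity (here a hypothetical water-breakthrough misfit) highlights candidates for range revision. All names, thresholds, and data are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_real, n_param = 100, 8
params = rng.uniform(size=(n_real, n_param))  # sampled initial-ensemble parameters
wbt_misfit = rng.normal(size=n_real)          # mismatch on water-breakthrough time (hypothetical)

for j in range(n_param):
    rho, pval = spearmanr(params[:, j], wbt_misfit)
    if abs(rho) > 0.3:                        # arbitrary screening threshold
        print(f"parameter {j}: Spearman rho={rho:.2f} -> candidate for range revision")
```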