A DIRECT HAMILTONIAN MCMC APPROACH FOR RELIABILITY ESTIMATION
Pub Date: 2019-09-09 | DOI: 10.7712/120219.6375.18838
Hamed Nikbakht, K. Papakonstantinou
Accurate and efficient estimation of rare-event probabilities is of significant importance, since the occurrence of such events often has widespread impacts. The focus of this work is on precisely quantifying these probabilities, often encountered in reliability analysis of complex engineering systems, by introducing a gradient-based Hamiltonian Markov Chain Monte Carlo (HMCMC) framework, termed Approximate Sampling Target with Post-processing Adjustment (ASTPA). The basic idea is to construct a relevant target distribution by weighting the high-dimensional random variable space through a one-dimensional likelihood model, using the limit-state function. To sample from this target distribution, we utilize HMCMC algorithms that produce Markov chain samples based on Hamiltonian dynamics rather than random walks. We compare the performance of a typical HMCMC scheme with our newly developed Quasi-Newton-based mass-preconditioned HMCMC algorithm, which can sample very adeptly, particularly in difficult cases with high dimensionality and very small failure probabilities. To eventually compute the probability of interest, an original post-sampling step is devised, using an inverse importance sampling procedure based on the samples. The user-defined parameters involved in ASTPA are then discussed and general default values are suggested. Finally, the performance of the proposed methodology is examined in detail and compared against Subset Simulation in a series of static and dynamic low- and high-dimensional benchmark problems.
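The following is a minimal, illustrative sketch (not the authors' implementation) of the ASTPA idea: the standard-normal input space is weighted by a one-dimensional likelihood of the limit-state function g(x), and the resulting target is sampled with a basic Hamiltonian Monte Carlo loop. The Gaussian-CDF weighting, its spread sigma, the example limit-state function, and the leapfrog settings are assumptions made here for illustration; the Quasi-Newton mass preconditioning and the inverse importance sampling post-processing step of the paper are not reproduced.

```python
# Sketch: weight the standard-normal input space by a 1-D likelihood of the
# limit-state function g(x), then sample the target with plain HMC.
import numpy as np
from scipy.stats import norm

def g(x):                      # example limit-state function; failure when g(x) <= 0
    return 5.0 - x.sum() / np.sqrt(x.size)

def log_target(x, sigma=0.5):  # log of: Phi(-g(x)/sigma) * standard-normal prior
    return norm.logcdf(-g(x) / sigma) - 0.5 * np.dot(x, x)

def grad_log_target(x, sigma=0.5, h=1e-6):  # simple finite-difference gradient
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = h
        grad[i] = (log_target(x + e, sigma) - log_target(x - e, sigma)) / (2 * h)
    return grad

def hmc(x0, n_samples=2000, eps=0.1, n_leap=20, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = x0.copy(), []
    for _ in range(n_samples):
        p = rng.standard_normal(x.size)                # resample momentum
        x_new, p_new = x.copy(), p.copy()
        p_new += 0.5 * eps * grad_log_target(x_new)    # leapfrog integration
        for _ in range(n_leap - 1):
            x_new += eps * p_new
            p_new += eps * grad_log_target(x_new)
        x_new += eps * p_new
        p_new += 0.5 * eps * grad_log_target(x_new)
        log_acc = (log_target(x_new) - 0.5 * p_new @ p_new) - (log_target(x) - 0.5 * p @ p)
        if np.log(rng.uniform()) < log_acc:            # Metropolis accept/reject
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

samples = hmc(np.zeros(10))   # samples concentrate near the failure region g(x) <= 0
```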
{"title":"A DIRECT HAMILTONIAN MCMC APPROACH FOR RELIABILITY ESTIMATION","authors":"Hamed Nikbakht, K. Papakonstantinou","doi":"10.7712/120219.6375.18838","DOIUrl":"https://doi.org/10.7712/120219.6375.18838","url":null,"abstract":"Accurate and efficient estimation of rare events probabilities is of significant importance, since often the occurrences of such events have widespread impacts. The focus in this work is on precisely quantifying these probabilities, often encountered in reliability analysis of complex engineering systems, by introducing a gradient-based Hamiltonian Markov Chain Monte Carlo (HMCMC) framework, termed Approximate Sampling Target with Post-processing Adjustment (ASTPA). The basic idea is to construct a relevant target distribution by weighting the high-dimensional random variable space through a one-dimensional likelihood model, using the limit-state function. To sample from this target distribution we utilize HMCMC algorithms that produce Markov chain samples based on Hamiltonian dynamics rather than random walks. We compare the performance of typical HMCMC scheme with our newly developed Quasi-Newton based mass preconditioned HMCMC algorithm that can sample very adeptly, particularly in difficult cases with high-dimensionality and very small failure probabilities. To eventually compute the probability of interest, an original post-sampling step is devised at this stage, using an inverse importance sampling procedure based on the samples. The involved user-defined parameters of ASTPA are then discussed and general default values are suggested. Finally, the performance of the proposed methodology is examined in detail and compared against Subset Simulation in a series of static and dynamic low- and high-dimensional benchmark problems.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129110322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A TWO-STAGE SURROGATE MODELING APPROACH FOR THE APPROXIMATION OF MODELS WITH NON-SMOOTH OUTPUTS
Pub Date: 2019-06-26 | DOI: 10.7712/120219.6346.18665
M. Moustapha, B. Sudret
Surrogate modelling has become an important topic in the field of uncertainty quantification, as it allows for the solution of otherwise computationally intractable problems. The basic idea in surrogate modelling consists in replacing an expensive-to-evaluate black-box function by a cheap proxy. Various surrogate modelling techniques have been developed in the past decade. These techniques typically assume accommodating properties of the underlying model, such as regularity and smoothness. However, such assumptions may not hold for some models in civil or mechanical engineering applications, e.g., due to the presence of snap-through instability patterns or bifurcations in the physical behavior of the system of interest. In such cases, building a single surrogate that accounts for all possible model scenarios leads to poor prediction capability. To overcome this hurdle, this paper investigates an approach where the surrogate model is built in two stages. In the first stage, the different behaviors of the system are identified using either expert knowledge or unsupervised learning, i.e., clustering. A classifier of these behaviors is then built using support vector machines. In the second stage, a regression-based surrogate model is built for each of the identified classes of behaviors. For any new point, the prediction is therefore made in two stages: first predicting the class and then estimating the response using an appropriate recombination of the surrogate models. The approach is validated on two examples, showing its effectiveness with respect to using a single surrogate model over the entire space.
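As a rough illustration of the two-stage workflow described above, the following sketch (assumptions, not the authors' code) clusters the responses of a toy non-smooth model, trains a support vector classifier to predict the behavior class from the inputs, and fits one Gaussian-process surrogate per class; prediction then classifies first and evaluates the matching surrogate. The toy model, the number of clusters, and the choice of Gaussian-process regression are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessRegressor

def model(x):                       # toy non-smooth model: a jump at x = 0.5
    return np.where(x[:, 0] < 0.5, np.sin(8 * x[:, 0]), 3.0 + np.sin(8 * x[:, 0]))

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 1))
y = model(X)

# Stage 1: identify behaviors (unsupervised) and learn a classifier for them.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(y.reshape(-1, 1))
classifier = SVC(kernel="rbf").fit(X, labels)

# Stage 2: one regression surrogate per identified class.
surrogates = {c: GaussianProcessRegressor().fit(X[labels == c], y[labels == c])
              for c in np.unique(labels)}

def predict(X_new):                 # classify first, then use the matching surrogate
    classes = classifier.predict(X_new)
    return np.array([surrogates[c].predict(x.reshape(1, -1))[0]
                     for c, x in zip(classes, X_new)])

print(predict(np.array([[0.2], [0.8]])))
```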
{"title":"A TWO-STAGE SURROGATE MODELING APPROACH FOR THE APPROXIMATION OF MODELS WITH NON-SMOOTH OUTPUTS","authors":"M. Moustapha, B. Sudret","doi":"10.7712/120219.6346.18665","DOIUrl":"https://doi.org/10.7712/120219.6346.18665","url":null,"abstract":"Surrogate modelling has become an important topic in the field of uncertainty quantification as it allows for the solution of otherwise computationally intractable problems. The basic idea in surrogate modelling consists in replacing an expensive-to-evaluate black-box function by a cheap proxy. Various surrogate modelling techniques have been developed in the past decade. They always assume accommodating properties of the underlying model such as regularity and smoothness. However such assumptions may not hold for some models in civil or mechanical engineering applications, e.g., due to the presence of snap-through instability patterns or bifurcations in the physical behavior of the system under interest. In such cases, building a single surrogate that accounts for all possible model scenarios leads to poor prediction capability. To overcome such a hurdle, this paper investigates an approach where the surrogate model is built in two stages. In the first stage, the different behaviors of the system are identified using either expert knowledge or unsupervised learning, i.e. clustering. Then a classifier of such behaviors is built, using support vector machines. In the second stage, a regression-based surrogate model is built for each of the identified classes of behaviors. For any new point, the prediction is therefore made in two stages: first predicting the class and then estimating the response using an appropriate recombination of the surrogate models. The approach is validated on two examples, showing its effectiveness with respect to using a single surrogate model in the entire space.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"2015 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131133504","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BLACK-BOX PROPAGATION OF FAILURE PROBABILITIES UNDER EPISTEMIC UNCERTAINTY
Pub Date: 2019-06-24 | DOI: 10.7712/120219.6373.18699
M. Angelis, S. Ferson, E. Patelli, V. Kreinovich
{"title":"BLACK-BOX PROPAGATION OF FAILURE PROBABILITIES UNDER EPISTEMIC UNCERTAINTY","authors":"M. Angelis, S. Ferson, E. Patelli, V. Kreinovich","doi":"10.7712/120219.6373.18699","DOIUrl":"https://doi.org/10.7712/120219.6373.18699","url":null,"abstract":"","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116223209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UNCERTAINTY QUANTIFICATION OF OPTIMAL THRESHOLD FAILURE PROBABILITY FOR PREDICTIVE MAINTENANCE USING CONFIDENCE STRUCTURES
Pub Date: 2019-06-24 | DOI: 10.7712/120219.6364.18502
Adolphus Lye, Alice Cicrello, E. Patelli
This paper seeks to analyze the imprecision associated with the statistical modelling method employed in devising a predictive maintenance framework for a plasma etching chamber. During operation, the plasma etching chamber may fail due to contamination resulting from a high number of particles being present. Based on a previous study, the particle count is observed to follow a Negative Binomial distribution, which is also used to model the probability of failure of the chamber. Using this model, an optimal threshold failure probability is determined such that maintenance is scheduled once this value is reached during operation of the chamber and the maintenance cost incurred is minimized. One problem, however, is that the parameter(s) defining the Negative Binomial distribution may themselves be uncertain in reality, which in turn gives rise to uncertainty in the optimal threshold failure probability. To address this, the paper adopts Confidence structures (or C-boxes) to quantify the uncertainty of the optimal threshold failure probability. This is achieved by introducing variations in the p-parameter of the Negative Binomial distribution and then plotting a series of cost-rate versus threshold failure probability curves. Using the information provided by these curves, empirical cumulative distribution functions are constructed for the possible upper and lower bounds of the threshold failure probability, from which the confidence interval for this quantity is determined at the 50%, 80%, and 95% confidence levels.
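The sketch below illustrates only the uncertainty-propagation workflow suggested by the abstract: sweep the uncertain p-parameter of the Negative Binomial model, minimize a cost-rate curve over the threshold failure probability for each p, and summarize the resulting optimal thresholds with empirical-CDF quantiles at the 50%, 80%, and 95% levels. The cost-rate model, every constant, and the replacement of a proper c-box construction by a simple parameter sweep are hypothetical simplifications, not the paper's formulation.

```python
import numpy as np
from scipy.stats import nbinom

C_PREV, C_CORR, N_PARTICLES_LIMIT, R = 1.0, 50.0, 50, 5   # illustrative constants

def failure_prob(p):
    # probability that the particle count exceeds the contamination limit (NB model)
    return nbinom.sf(N_PARTICLES_LIMIT, R, p)

def cost_rate(threshold, p):
    # hypothetical trade-off: frequent maintenance (small threshold) is costly per unit
    # time, while a high threshold risks an expensive unplanned failure; the NB failure
    # probability makes the trade-off depend on the uncertain p-parameter
    return C_PREV / max(threshold, 1e-6) + C_CORR * failure_prob(p) * threshold

thresholds = np.linspace(0.01, 0.99, 99)
p_values = np.linspace(0.05, 0.15, 200)                    # sweep of the uncertain p-parameter

optimal = np.array([thresholds[np.argmin([cost_rate(t, p) for t in thresholds])]
                    for p in p_values])

for level in (0.50, 0.80, 0.95):                           # interval bounds from the ECDF
    lo, hi = np.quantile(optimal, [(1 - level) / 2, (1 + level) / 2])
    print(f"{level:.0%} interval for optimal threshold: [{lo:.3f}, {hi:.3f}]")
```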
{"title":"UNCERTAINTY QUANTIFICATION OF OPTIMAL THRESHOLD FAILURE PROBABILITY FOR PREDICTIVE MAINTENANCE USING CONFIDENCE STRUCTURES","authors":"Adolphus Lye, Alice Cicrello, E. Patelli","doi":"10.7712/120219.6364.18502","DOIUrl":"https://doi.org/10.7712/120219.6364.18502","url":null,"abstract":"This paper seeks to analyze the imprecision associated with the statistical modelling method employed in devising a predictive maintenance framework on a plasma etching chamber. During operations, the plasma etching chamber may fail due to contamination as a result of a high number of particles that is present. Based on a study done, the particle count is observed to follow a Negative Binomial distribution model and it is also used to model the probability of failure of the chamber. Using this model, an optimum threshold failure probability is determined in which maintenance is scheduled once this value is reached during the operation of the chamber and that the maintenance cost incurred is the lowest. One problem however is that the parameter(s) used to define the Negative Binomial distribution may have uncertainties associated with it in reality and this eventually gives rise to uncertainty in deciding the optimum threshold failure probability. To address this, the paper adopts the use of Confidence structures (or C-boxes) in quantifying the uncertainty of the optimum threshold failure probability. This is achieved by introducing some variations in the p-parameter of the Negative Binomial distribution and then plotting a series of Cost-rate vs threshold failure probability curves. Using the information provided in these curves, empirical cumulative distribution functions are constructed for the possible upper and lower bounds of the threshold failure probability and from there, the confidence interval for the aforementioned quantity will be determined at 50%, 80%, and 95% confidence level.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116270119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MACHINE-LEARNING TOOL FOR HUMAN FACTORS EVALUATION – APPLICATION TO LION AIR BOEING 737-8 MAX ACCIDENT
Pub Date: 2019-06-24 | DOI: 10.7712/120219.6355.18709
C. Morais, K. Yung, E. Patelli
The capability of learning from accidents as quickly as possible helps prevent mistakes from being repeated. This has been illustrated by the short time interval between two accidents involving the same aircraft model, the Boeing 737-8 MAX. However, learning from major accidents and subsequently updating the developed accident models has proved to be a cumbersome process. This is because safety specialists tend to take a long time to read and digest the information, as accident reports are usually very detailed, long, and sometimes written with difficult language and structure. A strategy to automatically extract relevant information from accident reports and update model parameters is investigated. A machine-learning tool has been developed and trained on previous expert opinion on several accident reports. The intention is that, for each new accident report that is issued, the machine can quickly identify the most relevant features in seconds, instead of waiting days for an expert opinion. This way, the model can be updated more quickly and dynamically. An application to the preliminary accident report of the 2018 Lion Air accident is provided to show the feasibility of the proposed machine-learning approach.
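A minimal sketch of what such a tool could look like is given below, assuming a simple TF-IDF plus logistic-regression text classifier; the human-factor categories, the tiny training excerpts, and the model choice are entirely hypothetical placeholders for the expert-labelled accident-report data described in the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical expert-labelled excerpts standing in for the real training data
train_texts = [
    "crew did not follow the stall recovery procedure",
    "maintenance log omitted the sensor replacement record",
    "pilot response to repeated nose-down trim was delayed",
    "incorrect angle-of-attack sensor installed during servicing",
]
train_labels = ["crew action", "maintenance", "crew action", "maintenance"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

new_passage = ["flight crew was unable to counteract the automatic trim inputs"]
print(clf.predict(new_passage))     # predicted human-factor category for a new excerpt
```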
{"title":"MACHINE-LEARNING TOOL FOR HUMAN FACTORS EVALUATION – APPLICATION TO LION AIR BOEING 737-8 MAX ACCIDENT","authors":"C. Morais, K. Yung, E. Patelli","doi":"10.7712/120219.6355.18709","DOIUrl":"https://doi.org/10.7712/120219.6355.18709","url":null,"abstract":"The capability of learning from accidents as quickly as possible allows preventing repeated mistakes to happen. This has been shown by the small time interval between two accidents with the same aircraft model: the Boeing 737-8 MAX. However, learning from major accidents and subsequently update the developed accident models has been proved to be a cumbersome process. This is because safety specialists use to take a long period of time to read and digest the information, as the accident reports are usually very detailed, long and sometimes with a difficult language and structure. A strategy to automatically extract relevant information from report accidents and update model parameters is investigated. A machine-learning tool has been developed and trained on previous expert opinion on several accident reports. The intention is that for each new accident report that is issued, the machine can quickly identify the more relevant features in seconds-instead of waiting for some days for the expert opinion. This way, the model can be more quickly and dynamically updated. An application to the preliminary accident report of the 2018 Lion Air accident is provided to show the feasibility of the machine-learning proposed approach.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"123 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125768591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
REDUCED MODEL-ERROR SOURCE TERMS FOR FLUID FLOW
Pub Date: 2019-06-24 | DOI: 10.7712/120219.6351.18769
W. Edeling, D. Crommelin
It is well known that the wide range of spatial and temporal scales present in geophysical flow problems represents a (currently) insurmountable computational bottleneck, which must be circumvented by a coarse-graining procedure. The effect of the unresolved fluid motions enters the coarse-grained equations as an unclosed forcing term, denoted as the 'eddy forcing'. Traditionally, the system is closed by approximate deterministic closure models, i.e., so-called parameterizations. Instead of creating a deterministic parameterization, some recent efforts have focused on creating a stochastic, data-driven surrogate model for the eddy forcing from a (limited) set of reference data, with the goal of accurately capturing the long-term flow statistics. Since the eddy forcing is a dynamically evolving field, a surrogate should be able to mimic the complex spatial patterns displayed by the eddy forcing. Rather than creating such a (fully data-driven) surrogate, we propose to precede the surrogate construction step by a procedure that replaces the eddy forcing with a new model-error source term which: i) is tailor-made to capture spatially-integrated statistics of interest, ii) strikes a balance between physical insight and data-driven modelling, and iii) significantly reduces the amount of training data that is needed. Instead of creating a surrogate for an evolving field, we now only require a surrogate model for one scalar time series per statistical quantity-of-interest. Our current surrogate modelling approach builds on a resampling strategy, where we create a probability density function of the reduced training data that is conditional on (time-lagged) resolved-scale variables. We derive the model-error source terms, and construct the reduced surrogate using an ocean model of two-dimensional turbulence in a doubly periodic square domain.
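The resampling step can be illustrated with the following sketch (an assumption-laden simplification, not the authors' code): a scalar training signal is binned on a time-lagged resolved-scale variable, and surrogate values are later drawn by resampling from the bin matching the current resolved state. The synthetic signals, the single conditioning variable, and the bin count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lag, n_bins = 5, 20

# synthetic "training" data: resolved-scale scalar c and subgrid/model-error scalar r
t = np.arange(2000)
c = np.sin(0.05 * t) + 0.1 * rng.standard_normal(t.size)
r = 0.5 * np.roll(c, lag) + 0.05 * rng.standard_normal(t.size)

edges = np.quantile(c[:-lag], np.linspace(0, 1, n_bins + 1))   # bin the lagged predictor

def bin_of(val):
    return np.clip(np.searchsorted(edges, val) - 1, 0, n_bins - 1)

binned_r = [r[lag:][bin_of(c[:-lag]) == b] for b in range(n_bins)]

def sample_surrogate(c_lagged):
    """Draw one surrogate value of r conditional on the lagged resolved variable."""
    pool = binned_r[bin_of(c_lagged)]
    return rng.choice(pool) if pool.size else 0.0

print(sample_surrogate(0.3))   # resampled model-error term for the current resolved state
```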
{"title":"REDUCED MODEL-ERROR SOURCE TERMS FOR FLUID FLOW","authors":"W. Edeling, D. Crommelin","doi":"10.7712/120219.6351.18769","DOIUrl":"https://doi.org/10.7712/120219.6351.18769","url":null,"abstract":"It is well known that the wide range of spatial and temporal scales present in geophysical flow problems represents a (currently) insurmountable computational bottleneck, which must be circumvented by a coarse-graining procedure. The effect of the unresolved fluid motions enters the coarse-grained equations as an unclosed forcing term, denoted as the ’eddy forcing’. Traditionally, the system is closed by approximate deterministic closure models, i.e. so-called parameterizations. Instead of creating a deterministic parameterization, some recent efforts have focused on creating a stochastic, data-driven surrogate model for the eddy forcing from a (limited) set of reference data, with the goal of accurately capturing the long-term flow statistics. Since the eddy forcing is a dynamically evolving field, a surrogate should be able to mimic the complex spatial patterns displayed by the eddy forcing. Rather than creating such a (fully data-driven) surrogate, we propose to precede the surrogate construction step by a procedure that replaces the eddy forcing with a new model-error source term which: i) is tailor-made to capture spatially-integrated statistics of interest, ii) strikes a balance between physical insight and data-driven modelling , and iii) significantly reduces the amount of training data that is needed. Instead of creating a surrogate for an evolving field, we now only require a surrogate model for one scalar time series per statistical quantity-of-interest. Our current surrogate modelling approach builds on a resampling strategy, where we create a probability density function of the reduced training data that is conditional on (time-lagged) resolved-scale variables. We derive the model-error source terms, and construct the reduced surrogate using an ocean model of two-dimensional turbulence in a doubly periodic square domain.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123115752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TRACKING THE MODAL PARAMETERS OF THE BAIXO SABOR CONCRETE ARCH DAM WITH UNCERTAINTY QUANTIFICATION
Pub Date: 2019-06-01 | DOI: 10.7712/120219.6327.18856
Sérgio Pereira, E. Reynders, F. Magalhães, Á. Cunha, J. Gomes
{"title":"TRACKING THE MODAL PARAMETERS OF THE BAIXO SABOR CONCRETE ARCH DAM WITH UNCERTAINTY QUANTIFICATION","authors":"Sérgio Pereira, E. Reynders, F. Magalhães, Á. Cunha, J. Gomes","doi":"10.7712/120219.6327.18856","DOIUrl":"https://doi.org/10.7712/120219.6327.18856","url":null,"abstract":"","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121061977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A MACHINE LEARNING APPROACH FOR THE INVERSE QUANTIFICATION OF SET-THEORETICAL UNCERTAINTY
Pub Date: 2019-06-01 | DOI: 10.7712/120219.6334.18848
L. Bogaerts, M. Faes, D. Moens
{"title":"A MACHINE LEARNING APPROACH FOR THE INVERSE QUANTIFICATION OF SET-THEORETICAL UNCERTAINTY","authors":"L. Bogaerts, M. Faes, D. Moens","doi":"10.7712/120219.6334.18848","DOIUrl":"https://doi.org/10.7712/120219.6334.18848","url":null,"abstract":"","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114506825","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
KRIGING IN TENSOR TRAIN DATA FORMAT
Pub Date: 2019-04-21 | DOI: 10.7712/120219.6343.18651
S. Dolgov, A. Litvinenko, Dishi Liu
The combination of low-rank tensor techniques and Fast Fourier Transform (FFT) based methods has proved effective in accelerating various statistical operations such as Kriging, computing conditional covariance, geostatistical optimal design, and others. However, the approximation of a full tensor by its low-rank format can be computationally demanding. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. It is shown that the computational complexity of Kriging is thereby reduced to $\mathcal{O}(d r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank $r$ remains stable as $n$ and $d$ increase. The advantages of this approach over those using plain FFT are demonstrated in synthetic and real data examples.
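For context, the sketch below shows only the FFT building block that such methods rest on: on a regular one-dimensional grid, a stationary covariance matrix is Toeplitz, so it can be embedded in a circulant matrix and applied to a vector in O(n log n) via the FFT. The exponential covariance and the grid are arbitrary choices, and the TT/TT-Cross compression that is the actual contribution of the paper is not reproduced here.

```python
import numpy as np

n, ell = 256, 0.1
x = np.linspace(0, 1, n)
first_row = np.exp(-np.abs(x - x[0]) / ell)          # first row of the Toeplitz covariance

# circulant embedding of the Toeplitz matrix (size 2n - 2) and its eigenvalues via FFT
circ = np.concatenate([first_row, first_row[-2:0:-1]])
eig = np.fft.fft(circ).real

def cov_matvec(v):
    """Multiply the n x n covariance matrix by v without ever forming the matrix."""
    v_pad = np.concatenate([v, np.zeros(circ.size - n)])
    return np.fft.ifft(eig * np.fft.fft(v_pad)).real[:n]

v = np.random.default_rng(0).standard_normal(n)
dense = np.exp(-np.abs(x[:, None] - x[None, :]) / ell) @ v      # direct check
print(np.allclose(cov_matvec(v), dense))                         # True
```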
{"title":"KRIGING IN TENSOR TRAIN DATA FORMAT","authors":"S. Dolgov, A. Litvinenko, Dishi Liu","doi":"10.7712/120219.6343.18651","DOIUrl":"https://doi.org/10.7712/120219.6343.18651","url":null,"abstract":"Combination of low-tensor rank techniques and the Fast Fourier transform (FFT) based methods had turned out to be prominent in accelerating various statistical operations such as Kriging, computing conditional covariance, geostatistical optimal design, and others. However, the approximation of a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into the FFT-based Kriging. It is shown that here the computational complexity of Kriging is reduced to $mathcal{O}(d r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank $r$ remains stable for increasing $n$ and $d$. The advantages of this approach against those using plain FFT are demonstrated in synthetic and real data examples.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123975881","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MACHINE LEARNING FOR CLOSURE MODELS IN MULTIPHASE FLOW APPLICATIONS
Pub Date: 2019-02-19 | DOI: 10.7712/120219.6348.18409
J. Buist, B. Sanderse, Yous van Halder, B. Koren, Gertjan van Heijst
Multiphase flows are described by the multiphase Navier-Stokes equations. Numerically solving these equations is computationally expensive, and performing many simulations for the purpose of design, optimization and uncertainty quantification is often prohibitively expensive. A simplified model, the so-called two-fluid model, can be derived from a spatial averaging process. The averaging process introduces a closure problem, which is represented by unknown friction terms in the two-fluid model. Correctly modeling these friction terms is a long-standing problem in two-fluid model development. In this work we take a new approach, and learn the closure terms in the two-fluid model from a set of unsteady high-fidelity simulations conducted with the open source code Gerris. These form the training data for a neural network. The neural network provides a functional relation between the two-fluid model's resolved quantities and the closure terms, which are added as source terms to the two-fluid model. With the addition of the locally defined interfacial slope as an input to the closure terms, the trained two-fluid model reproduces the dynamic behavior of high-fidelity simulations better than the two-fluid model using a conventional set of closure terms.
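A minimal sketch of the regression step, under stated assumptions rather than the paper's setup, is given below: a small neural network maps resolved two-fluid quantities (including the local interfacial slope) to closure source terms. The input set, the synthetic training targets standing in for the Gerris data, and the network size are placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 5000
# resolved inputs: [holdup, liquid velocity, gas velocity, interfacial slope]
X = np.column_stack([rng.uniform(0.1, 0.9, n), rng.uniform(0.5, 2.0, n),
                     rng.uniform(2.0, 10.0, n), rng.uniform(-0.2, 0.2, n)])
# placeholder closure targets standing in for friction terms extracted from reference data
y = np.column_stack([0.01 * X[:, 2] ** 2 * (1 - X[:, 0]) + 0.05 * X[:, 3],
                     0.02 * X[:, 1] ** 2 * X[:, 0] - 0.03 * X[:, 3]])

closure_model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                                           random_state=0))
closure_model.fit(X, y)

# inside a two-fluid solver, the predicted terms would be added as source terms each step
state = np.array([[0.5, 1.2, 6.0, 0.05]])
print(closure_model.predict(state))
```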
{"title":"MACHINE LEARNING FOR CLOSURE MODELS IN MULTIPHASE FLOW APPLICATIONS","authors":"J. Buist, B. Sanderse, Yous van Halder, B. Koren, Gertjan van Heijst","doi":"10.7712/120219.6348.18409","DOIUrl":"https://doi.org/10.7712/120219.6348.18409","url":null,"abstract":"Multiphase flows are described by the multiphase Navier-Stokes equations. Numerically solving these equations is computationally expensive, and performing many simulations for the purpose of design, optimization and uncertainty quantification is often prohibitively expensive. A simplified model, the so-called two-fluid model, can be derived from a spatial averaging process. The averaging process introduces a closure problem, which is represented by unknown friction terms in the two-fluid model. Correctly modeling these friction terms is a long-standing problem in two-fluid model development. In this work we take a new approach, and learn the closure terms in the two-fluid model from a set of unsteady high-fidelity simulations conducted with the open source code Gerris. These form the training data for a neural network. The neural network provides a functional relation between the two-fluid model's resolved quantities and the closure terms, which are added as source terms to the two-fluid model. With the addition of the locally defined interfacial slope as an input to the closure terms, the trained two-fluid model reproduces the dynamic behavior of high fidelity simulations better than the two-fluid model using a conventional set of closure terms.","PeriodicalId":153829,"journal":{"name":"Proceedings of the 3rd International Conference on Uncertainty Quantification in Computational Sciences and Engineering (UNCECOMP 2019)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131342671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}